Paradoxical vaccines

A new breed of vaccines is on the horizon: they replicate in one type of cell, allowing for their production, but will not replicate in humans. Two different examples have recently been described for influenza and chikungunya viruses.

The influenza virus vaccine is produced by introducing amber (UAG) translation stop codons into multiple viral genes. Cloned DNA copies of the mutated viral RNAs are not infectious in normal cells. However, when introduced into specially engineered ‘suppressor’ cells that can insert an amino acid at each amber stop codon, infectious viruses can be produced. These viruses replicate only in the suppressor cells, not in normal cells, because in normal cells the stop codons lead to the production of truncated proteins that do not function.
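The logic of the suppressor-cell trick can be sketched in a few lines of code. This is a toy model with a made-up sequence and an arbitrary choice of inserted amino acid, not anything from the paper:

```python
# Toy model of amber (UAG) stop-codon suppression. The sequence and the
# amino acid inserted at UAG are hypothetical, chosen only to illustrate
# the difference between a normal cell and an engineered suppressor cell.

# Minimal codon table covering just the codons in the example gene.
CODON_TABLE = {
    "AUG": "M", "GGU": "G", "UUC": "F", "AAA": "K",
    "UAG": "*",  # amber stop codon
    "UAA": "*",  # ochre stop codon
}

def translate(mrna, suppress_amber=False, amber_aa="Y"):
    """Translate an mRNA codon by codon.

    In a normal cell, translation terminates at UAG, yielding a
    truncated protein. In a suppressor cell, a suppressor tRNA inserts
    an amino acid at UAG and translation continues to the next
    genuine stop codon.
    """
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        if codon == "UAG" and suppress_amber:
            protein.append(amber_aa)  # suppressor tRNA reads through
            continue
        aa = CODON_TABLE[codon]
        if aa == "*":  # unsuppressed stop codon: terminate
            break
        protein.append(aa)
    return "".join(protein)

# A viral gene with an engineered amber codon in the middle.
gene = "AUG" + "GGU" + "UAG" + "UUC" + "AAA" + "UAA"

print(translate(gene))                       # normal cell: truncated 'MG'
print(translate(gene, suppress_amber=True))  # suppressor cell: full-length 'MGYFK'
```

The snippet models only the decision point at each UAG; in the engineered cells, a suppressor tRNA supplies the amino acid, while every other cell type terminates translation there.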

When inoculated into mice, the stop codon-containing influenza viruses infect cells, and although they do not replicate, a strong and protective immune response is induced. Because the viral genomes contain multiple mutations, the viruses are far less likely than traditional infectious, attenuated vaccines to acquire mutations that allow them to replicate in normal cells. It’s a clever approach to designing an infectious but replication-incompetent vaccine (for more discussion, listen to TWiV #420).

Another approach is exemplified by an experimental vaccine against chikungunya virus. The authors used Eilat virus, a virus that replicates only in insects. The genes encoding the structural proteins of Eilat virus were replaced with those of chikungunya virus. The recombinant virus replicates in insect cells, but not in mammalian cells. The virus enters the latter cells, and some viral proteins are produced, but genome replication does not take place.

When the Eilat-Chikungunya recombinant virus is inoculated into mice, there is no genome replication, but a strong and protective immune response is induced. The block to replication – viral RNA synthesis does not occur – is not overcome by multiple passages in mice. Like the stop codon-containing influenza viruses, the Eilat recombinant virus is a replication-incompetent vaccine.

These are two different approaches to making viruses that replicate in specific cells in culture – the suppressor cells for influenza virus, and insect cells for Eilat virus. When inoculated into non-suppressor cells (influenza virus) or non-insect cells (Eilat virus), a strong immune response is initiated. Neither virus should replicate in humans, but clinical trials have to be done to determine if they are immunogenic and protective.

The advantage of these vaccine candidates compared with inactivated vaccines is that they enter cells and produce some viral proteins, likely resulting in a stronger immune response. Compared with infectious, attenuated vaccines, they are far less likely to revert to virulence, and are easier to isolate.

These two potential vaccine technologies have been demonstrated with influenza and chikungunya viruses, but they can be used for other viruses. The stop-codon approach is more universally applicable, because the mutations can be introduced into the genome of any virus. The Eilat virus approach can only be used with viruses whose structural proteins are compatible with the vector – probably only togaviruses and flaviviruses. A similar approach might be used with insect-specific viruses in other virus families.

Why do I call these vaccines ‘paradoxical’? Because they are infectious and non-infectious, depending on the host cell that is used.

Note: The illustration is from a t-shirt, and the single-letter code of the protein spells out a message. However, the title, ‘the gene stops here’, is wrong. It should be ‘the protein stops here’. The 3’-untranslated region, which continues beyond the stop codon, is considered part of the gene.

TWiV 420: Orthogonal vectors

The TWiV gurus describe how to use an orthogonal translation system to produce infectious but replication-incompetent influenza vaccines.

You can find TWiV #420 at microbe.tv/twiv, or listen below.

Download TWiV 420 (70 MB .mp3, 116 min)
Subscribe (free): iTunes, RSS, or email

Become a patron of TWiV!

Fake news and fake science

In a recent editorial, the New York Times wrote about ‘the breakdown of a shared public reality built upon widely accepted facts’. As a scientist, I am appalled by the disdain for facts shown by many in this country, including the President-Elect. Unfortunately, science is not without its share of fake information.

The Times argues that at one time, nearly everyone had a unified source of news – the proverbial Walter Cronkite. Social media and the internet changed all that, allowing people to have their own sources of news, whether they be real or fake. The web developers in Macedonia who are paid $30,000 a month to spew out fake news are just part of the problem.

The goal of science is to discover how our world works. It’s about finding facts, not fake answers. Yet fake science has always been with us. Not long after Edward Jenner demonstrated vaccination against smallpox using pustules from milkmaids with cowpox, skeptics thought that this process would lead to the growth of cow-parts from the inoculated areas (see illustration). To this day anti-vaxers spew fake science which they claim shows that vaccines are not safe, do not work, or cause autism.

Fake science does not stop with anti-vaxers. There are people who deny climate change (including our President-Elect), despite easily accessible data showing that the trend is real. There are people who, bafflingly, claim that HIV does not cause AIDS, or that Zika virus does not cause birth defects, or that genetically modified plants will cause untold harm to people who consume them. The list of fake science goes on and on. The situation is appalling to any scientist who examines the data and finds clear proof that HIV does cause AIDS, and that Zika virus does cause birth defects.

There is also fake science perpetrated by scientists – those who publish fake data to advance their career. There are so many examples of such science fraud that there is a website to document the inevitable retractions – called RetractionWatch, of course. I find the existence of such a site lamentable.

That fake news can play such a large part in the operation of our society was something I only recognized recently. My initial reaction, as a scientist, was outrage that anyone would want to believe in, and adopt, lies. But this is a naive reaction, not only because bad behavior should always be expected of some humans, but because fake science has surrounded me for my entire career.

Nevertheless, I am a scientist who looks for the truth, and I simply cannot tolerate fabrication, whether in science or politics or in any field.

I don’t know how to solve the fake news and fake science problems. But the Times has a suggestion:

Without a Walter Cronkite to guide them, how can Americans find the path back to a culture of commonly accepted facts, the building blocks of democracy? A president and other politicians who care about the truth could certainly help them along. In the absence of leaders like that, media organizations that report fact without regard for partisanship, and citizens who think for themselves, will need to light the way.

I’m not sure that today’s profit-driven media organizations are the answer to the fake news problem. But I’ve always felt that scientists can help counter fake science. We all need to communicate in some way so that the public sees us as a single voice, advocating the huge role that science plays in our lives. That’s why here at virology blog, and over at MicrobeTV, you’ll always find real science.

TWiV 419: The selfless gene

The TWiVrific gang reveal how integration of a virophage into the nuclear genome of a marine protozoan enhances host survival after infection with a giant virus.

You can find TWiV #419 at microbe.tv/twiv, or listen below.

Download TWiV 419 (64 MB .mp3, 105 min)
Subscribe (free): iTunes, RSS, or email

Become a patron of TWiV!

Altruistic viruses

Virophages (the name means ‘virus eater’) were first discovered to replicate only in amoebae infected with the giant mimiviruses or mamaviruses. They reduce yields of the giant viruses, and also decrease killing of the host cell. Another virophage, called mavirus, has been found to integrate into the genome of its host and to behave like an inducible antiviral defense system (link to paper).

The host cell of the virophage mavirus is Cafeteria roenbergensis, or Cro (pictured), a marine phagotrophic flagellate that is infected with the giant virus CroV (Cafeteria roenbergensis virus). When Cro cells are infected with a mixture of mavirus and CroV, the virophage integrates into the host cell genome. There it remains silent; the cells survive, and no virophage particles are produced. Such cells can be called lysogens, a name applied to bacteria containing integrated bacteriophage genomes, or prophages.

How does the mavirus genome integrate into the Cro cell? The viral genome encodes an integrase, an enzyme that cuts host DNA and inserts a copy of the viral genome. Retroviruses achieve the same feat via an integrase.
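Conceptually, integration is a cut-and-paste operation on the host genome. Here is a deliberately simplified sketch with hypothetical sequences, ignoring real-world details such as target-site duplication:

```python
def integrate(host_dna, viral_genome, cut_site):
    """Toy model of integrase action: cut the host DNA at one site and
    splice in a copy of the viral genome. Real integrases also generate
    short target-site duplications, omitted here for simplicity."""
    return host_dna[:cut_site] + viral_genome + host_dna[cut_site:]

host = "AAAATTTTCCCC"
provirus = integrate(host, "gagagag", cut_site=8)  # viral DNA in lowercase
print(provirus)  # AAAATTTTgagagagCCCC
```

The integrated copy then sits in the host genome, silent until induction, just as described for the Cro-mavirus lysogens below.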

When Cro-mavirus lysogens are infected with CroV, the integrated mavirus genome is transcribed to RNA, the viral DNA replicates, and new virus particles are formed. These virophages inhibit the replication of CroV by 100-1000 fold. As a consequence, the host cell population survives.

These findings suggest that the virophage mavirus is altruistic: induction of the integrated genome leads to killing of the host cell, but other members of the cell population are protected. Altruism is not unknown in Nature, but how it evolved is an intriguing question.

All this work was done in a laboratory. It will be necessary to determine if integration of mavirus into Cro cells in the wild has any influence on the ecology of these organisms.

TWiV 418: Of mice and MERS

The TWiVsters describe a new animal model for MERS coronavirus-induced acute respiratory distress syndrome, produced by CRISPR/Cas9 editing of the mouse gene encoding an ortholog of the virus receptor.

You can find TWiV #418 at microbe.tv/twiv, or listen below.

Download TWiV 418 (63 MB .mp3, 104 min)
Subscribe (free): iTunes, RSS, or email

Become a patron of TWiV!

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent master’s degree program in public health and journalism at the University of California, Berkeley.

Wow, the research from the CBT/GET crowd in The Netherlands never ceases to amaze. Like the work of their friends in the U.K., each study comes up with new ways to be bad. It’s almost too easy to poke holes in these things. And yet the investigators appear unable to restrain themselves from making extremely generous over-interpretations of their findings–interpretations that cannot withstand serious scrutiny. The investigators always conclude, no matter what, that cognitive and/or behavioral therapies are effective for treating the disease they usually call chronic fatigue syndrome.

That this so-called science manages to get through peer review is astonishing. That is, unless we assume the studies are all peer-reviewed by other investigators who share the authors’ “unhelpful beliefs” and “dysfunctional cognitions” about ME/CFS and the curative powers of cognitive behavior therapy and graded exercise therapy.

Let’s take a quick look at yet another Dutch study of CBT for adolescents, a 2004 trial published in the BMJ. This one offers a superb example of over-interpretation. The small trial, with 71 participants, had two arms. One group received ten sessions of CBT over five months. The other received…a place on a waiting list for treatment. That’s right–they got nothing. Guess what? Those who got something did better on subjective measures at five months than those who got nothing. The investigators’ definitive conclusion: CBT is an effective treatment for sick teens.

I mean, WTF? It’s not hard to figure out that, you know, offering people some treatment is more likely to produce positive responses to subjective questions than offering them a place on a waiting list. That banal insight must be right in the first chapter of Psychological Research for Dummies. Aren’t these investigators presenting themselves as authorities on human behavior? Have they heard of something called the placebo effect?

Here’s what this BMJ study proved: Ten sessions of something lead to more reports of short-term benefits than no sessions of anything. But ten sessions of what? Maybe ten sessions of poker-playing or ten sessions of watching Seinfeld reruns while holding hands with the therapist and singing “The Girl from Ipanema” in falsetto would have produced the same results. Who knows? To flatly declare that their findings prove that CBT is an effective treatment—without caveats or an iota of caution—is a huge and unacceptable interpretive leap. The paper should never have been published in this form. It’s ridiculous to take this study as some kind of solid “evidence” for CBT.

But from the perspective of the Dutch research group, this waiting-list strategy apparently worked so well that they used it again for a 2015 study of group CBT for chronic fatigue syndrome. In this study, providing CBT in groups of four or eight patients worked significantly better than placing patients on a waiting list and providing them with absolutely nothing. Of course, no one could possibly take these findings to mean that group CBT specifically is an effective treatment—except they did.

When I’m reading this stuff I sometimes feel like I’m going out of my mind. Do I really have to pick through every one of these papers to point out flaws that a first-year epidemiology student could spot?

One big issue here is how these folks piggy-back one bad study on top of another to build what appears to be a robust body of research but is, in fact, a house of cards. When you expose the cracks in the foundational studies, the whole edifice comes tumbling down. A case in point: a 2007 Dutch study that explored the effect of CBT on “self-reported cognitive impairments and neuropsychological test performance.” Using data from two earlier studies, the investigators concluded that CBT reduced self-reported cognitive impairment but did not improve neuropsychological test performance.

Which studies was this 2007 study based on? Well, one of them was the very problematic 2004 study I have just discussed–the one that found CBT effective when compared to nothing. The other was the 2001 study in The Lancet that I wrote about in my last post. As I noted, this Lancet study claimed to be using the CDC criteria for chronic fatigue syndrome, but then waived the requirement that patients have four other symptoms besides fatigue. So it was, in effect, a study of a heterogeneous group of people suffering from at least six months of fatigue.

This case definition—six months of fatigue, with no other symptoms necessary—was used in the PACE trial and is known as the Oxford criteria. It has been discredited because it generates heterogeneous populations of people suffering from a variety of fatiguing illnesses. The results of Oxford criteria studies cannot be extrapolated to those with ME/CFS.

The 2007 study relies on the accuracy and validity of the two studies whose data it incorporates. Since those earlier studies violated basic understandings of scientific analysis, the new study is also bogus and cannot be taken seriously.

The PACE authors themselves have perfected this strategy of generating new bad papers by stacking up earlier bad ones. In November, Trudie Chalder demonstrated her personal flair for this technique as co-author of a systematic review of “attentional and interpretive bias towards illness-related information in chronic fatigue syndrome.” The authors’ conclusion: “Cognitive processing biases may maintain illness beliefs and symptoms in people with CFS.” The proposed solution to that would obviously be some sessions of CBT to correct those pesky cognitive processing biases.

Among other problems, Dr. Chalder and her co-authors included data from Oxford criteria studies. By including in the mix these heterogeneous samples of people suffering from chronic fatigue, Dr. Chalder and her colleagues have invalidated their claim that it is a study of the illness known as chronic fatigue syndrome. Of course, Psychological Medicine, which published this new research gem, is the journal that published—and has consistently refused to correct–the PACE “recovery” paper in which participants could get worse but still meet “recovery” thresholds.

The Dutch branch of the CBT/GET ideological brigade has been centered at Radboud University Nijmegen, home base for many years of two of the movement’s leading lights: Dr. Gijs Bleijenberg and Dr. Hans Knoop. Dr. Knoop recently moved to the University of Amsterdam and is currently a co-investigator of FITNET-NHS with Esther Crawley. Dr. Bleijenberg, on the occasion of his own retirement a few years ago, had this to say about his longtime friend and colleague, PACE investigator Michael Sharpe: “Dear Mike, we know each other nearly 20 years. You have inspired me very much in the way you treated CFS. Thanks a lot!”

Indeed. Dr. Bleijenberg and his Dutch colleagues appear to have learned a great deal from their PACE besties. Dr. Bleijenberg and Dr. Knoop demonstrated their own nimble use of language in the 2011 commentary in The Lancet that accompanied the publication of the first PACE results. I discussed this deceptive commentary at length in a post last year, so I won’t regurgitate the whole sorry argument here. But the Dutch investigators themselves are well aware that their claim that thirty percent of PACE participants met a “strict criterion” for recovery is preposterous.

How do I know that Dr. Bleijenberg and Dr. Knoop know this? Because as I documented in last year’s post, claims in the 2011 commentary contradict and ignore statements they themselves made in a 2007 paper that posed this question: “Is a full recovery possible after cognitive behavioural therapy for chronic fatigue syndrome?” (The answer, of course, was yes. Peter White, the lead PACE investigator, was a co-author of the 2007 paper.) Moreover, Dr. Bleijenberg and Dr. Knoop certainly know that the “strict criterion” they touted included thresholds that some participants had already met at baseline—yet they have still refused to correct this statement.

Given that all of these studies present serious methodological concerns, the Dutch Health Council panel considering the science of ME/CFS should be very, very wary of using them to formulate recommendations. The panel should understand that, within the next few months, peer-reviewed analyses of the original PACE data are likely to be published. (Two such analyses—one by the PACE authors themselves, one by an independent group of patients and academic statisticians–have already been published online, without peer review.) The upcoming papers will demonstrate conclusively that the “benefits” reported by the PACE team were mostly or completely illusory—and were obtained only by methodological anomalies like dramatic and unacceptable changes in outcome measures.

In an open letter to The Lancet posted on Virology Blog last February, dozens of prominent scientists and clinicians condemned the PACE study and its conclusions in harsh terms. In the U.K., the First-Tier Tribunal cited this worldwide dismay about the trial’s egregious lapses while demolishing the PACE authors’ excuses for withholding their data. The studies from the Radboud University crowd and their compatriots all rest on the same silly, unproven hypotheses of dysfunctional thinking, fear of activity, and deconditioning, and are just as intellectually incoherent and dishonest.

Should the Health Council produce a report recommending cognitive and behavioral treatments based on this laughable body of “research,” the organization could become an international joke and suffer enormous long-term reputational damage. The entire PACE paradigm is undergoing a very public unraveling. Everyone can now see what patients have seen for years. Meanwhile, biomedical researchers in the U.S., Norway, and elsewhere are narrowing in on the actual pathophysiology underlying ME/CFS.

It would be a shame to see the Dutch marching backwards to embrace scientific illiteracy and adopt an “Earth-is-flat” approach to reality.

*****

And for a special bonus, let’s now take another quick peek at Dr. Crawley’s work. Someone recently e-mailed me a photo of a poster presentation by Dr. Crawley and three colleagues. This poster was shown at the inaugural conference of the U.K. CFS/ME Research Collaborative, or CMRC, held in 2014. The poster was based on information from the same dataset used for Dr. Crawley’s recent Pediatrics study. As I pointed out two posts ago, that flawed study claimed a surprisingly high prevalence of 2% among adolescents—a figure that drew widespread attention in media reports.

Dr. Crawley has cited high prevalence estimates to argue for more research into and treatment with CBT and GET. And if these prevalence rates were real, that might make sense. However, as I noted, her method of identifying the illness was specious—she decided, without justification or explanation, that she could diagnose chronic fatigue syndrome through parental and child reports of chronic fatigue, and without information from clinical examinations. In fact, after those who appeared to have high levels of depression were removed, the prevalence fell to 0.6%—although this lower figure is not the one Dr. Crawley has emphasized.

Despite the high prevalence, however, the same dataset showed that adolescents suffering from the illness generally got better without any treatment at all, according to the 2014 poster presentation. Here’s the poster’s conclusion: “Persistent CFS/ME is rare in teenagers and most teenagers not seen in a clinical service will recovery spontaneously.”

Isn’t that great? Why haven’t I seen these hopeful data before? Although the poster predated this year’s Pediatrics paper, the data about very high rates of spontaneous recovery did not make it into that prevalence study. Moreover, the FITNET-NHS protocol and the recruitment leaflet highlight the claim that few adolescents will recover at six months without “specialist treatment” but most will recover if they receive it. Unmentioned is the highly salient fact that this “specialist treatment” apparently makes no long-term difference.

In reality, the adolescents who recovered spontaneously most likely were not suffering from ME/CFS in the first place. Dr. Crawley certainly hasn’t provided sufficient evidence that any of the children in the database she used actually had it, despite her insistence on using the term. Most likely, some unknown number of those identified as having chronic fatigue syndrome in the Pediatrics paper and in the poster presentation did have ME/CFS. But many or most were experiencing what could only be called a bout of chronic fatigue, for unknown reasons.

It is disappointing that Dr. Crawley did not include the spontaneous recovery rate in the Pediatrics paper or in the FITNET-NHS protocol. In fact, as far as I can tell, these optimistic findings have not been published anywhere. I don’t know the rationale for this decision to withhold rather than publish substantive information. Perhaps the calculation is that public reports of high rates of spontaneous recovery would undermine the arguments for ever-more funding to study CBT and GET? Just a guess, of course.

(Esther–Forgive me if I’m mistaken about whether these data have been published somewhere. I have only seen this information in your poster for the inaugural CMRC conference in 2014. If the data have been peer-reviewed and published, I stand corrected on that point and applaud your integrity.)

At the Hamilton, Montana Performing Arts Center, Vincent speaks with three local high school graduates and two high school teachers about how Rocky Mountain Laboratories influenced school science programs and opened up career opportunities.

You can find TWiM #140 at microbe.tv/twim, or listen and watch here.

Right click to download TWiM #140 (47 MB .mp3, 78 minutes)

Subscribe to TWiM (free) on iTunes, Stitcher, Android, or RSS, or by email. You can also listen on your mobile device with the Microbeworld app.

Become a Patron of TWiM!

For World AIDS Day 2016, Vincent speaks with Gary Nabel, Chief Scientific Officer at Sanofi and former Director of the Vaccine Research Institute of NIAID, about his career and his work on HIV vaccines.

You can find this TWiV Special at microbe.tv/twiv, or listen and watch here.

Download this TWiV Special (25 MB .mp3, 41 min)
Subscribe (free): iTunes, RSS, or email

Become a patron of TWiV!

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent master’s degree program in public health and journalism at the University of California, Berkeley.

Last week’s post on FITNET-NHS and Esther Crawley stirred up a lot of interest. I guess people get upset when researchers cite shoddy “evidence” from poorly designed trials to justify foisting psychological treatments on kids with a physiological disease. I wanted to post some additional bits and pieces related to the issue.

*****

I sent Dr. Crawley a link to last week’s post, offering her an opportunity to send her response to Dr. Racaniello for posting on Virology Blog, along with my response to her response. So far, Dr. Racaniello and I haven’t heard back—I doubt we will. Maybe she feels more comfortable misrepresenting facts in trial protocols and radio interviews than in addressing the legitimate concerns raised by patients and confronting the methodological flaws in her research. I hope Dr. Crawley knows she will always have a place on Virology Blog to present her perspective, should she choose to exercise that option. (Esther, are you reading this?)

*****

From reading the research of the CBT/GET/PACE crowd, I get the impression they are all in the habit of peer-reviewing and supporting each other’s work. I make that assumption because it is hard to imagine that independent scientists not affiliated with this group would overlook all the obvious problems that mar their studies—like outcome measures that represent worse health than entry criteria, as in the PACE trial itself. So it’s not surprising to learn that one of the three principal PACE investigators, psychiatrist Michael Sharpe, was on the committee that reviewed—and approved—Dr. Crawley’s one-million-pound FITNET-NHS study.

FITNET-NHS is being funded by the U.K.’s National Institute for Health Research. I have no idea what role, if any, Dr. Sharpe played in pushing through Dr. Crawley’s grant, but it likely didn’t hurt that the FITNET-NHS protocol cited PACE favorably while failing to point out that it has been rejected as fatally flawed by dozens of distinguished scientists and clinicians. Of course, the protocol also failed to point out that the reanalyses of the trial data have shown that the findings published by the PACE authors were much better than the results using the methods they promised in their protocol. (More on the reanalyses below.) And as I noted in my previous post, the FITNET-NHS protocol also misstated the NICE guidelines for chronic fatigue syndrome, making post-exertional malaise an optional symptom rather than a required component—thus conflating chronic fatigue and chronic fatigue syndrome, just as the PACE authors did by using the overly broad Oxford criteria.

The FITNET-NHS proposal also didn’t note some similarities between PACE and the Dutch FITNET trial on which it is based. Like the PACE trial, the Dutch relied on a post-hoc definition of “recovery.” The thresholds the FITNET investigators selected after they saw the results were pretty lax, which certainly made it easier to find that participants had attained “recovery.” Also like the PACE trial, the Dutch participants in the comparison group ended up in the same place as the intervention group at long-term follow-up. Just as the CBT and GET in PACE offered no extended advantages, the same was true of the online CBT provided in FITNET.

And again like the PACE authors, the FITNET investigators downplayed these null findings in their follow-up paper. In a clinical trial, the primary results are supposed to be comparisons between the groups. Yet in the follow-up PACE and FITNET articles, both teams highlighted the “within-group” comparisons. That is, they treated the fact that there were no long-term differences between the groups as an afterthought and boasted instead that the intervention groups sustained the progress they initially made. That might be an interesting sub-finding, but to present “within-group” results as a clinical trial’s main outcome is highly disingenuous.

*****

As part of her media blitz for the FITNET-NHS launch, Dr. Crawley was interviewed on a BBC radio program by a colleague, Dr. Phil Hammond. In this interview, she made some statements that demonstrate one of two things: Either she doesn’t know what she’s talking about and her misrepresentations are genuine mistakes, or she’s lying. So either she’s incompetent, or she lacks integrity. Not a great choice.

Let’s parse what she said about the fact that, at long-term follow-up, there were no apparent differences between the intervention and the comparison groups in the Dutch FITNET study. Here’s her comment:

“Oh, people have really made a mistake on this,” said Dr. Crawley. “So, in the FITNET Trial, they were offered FITNET or usual care for six months, and then if they didn’t make a recovery in the usual care, they were offered FITNET again, and they were then followed up at 2 to 3 years, so of course what happened is that a lot of the children who were in the original control arm, then got FITNET as well, so it’s not surprising that at 2 or 3 years, the results were similar.”

This is simply not an accurate description. As Dr. Crawley must know, some of the Dutch FITNET participants in the “usual care” comparison group went on to receive FITNET, and others didn’t. Both sets of usual care participants—not just those who received FITNET—caught up to the original FITNET group. For Dr. Crawley to suggest that the reason the others caught up was that they received FITNET is, perhaps, an unfortunate mistake. Or else it’s a deliberate untruth.

*****

Another example from the BBC radio interview: Dr. Crawley’s inaccurate description of the two reanalyses of the raw trial data from the PACE study. Here’s what she said:

“First of all they did a reanalysis of recovery based on what the authors originally said they were going to do, and that reanalysis done by the authors is entirely consistent with their original results. [Actually, Dr. Crawley is mistaken here; the PACE authors did a reanalysis of “improvement,” not of “recovery”]…Then the people that did the reanalysis did it again, using a different definition of recovery, that was much much harder to reach–and the trial just wasn’t big enough to show a difference, and they didn’t show a difference. [Here, Dr. Crawley is talking about the reanalysis done by patients and academic statisticians.] Now, you know, you can pick and choose how you redefine recovery, and that’s all very important research, but the message from the PACE Trial is not contested; the message is, if you want to get better, you’re much more likely to get better if you get specialist treatment.”

This statement is at serious odds with the facts. Let’s recap: In reporting their findings in The Lancet in 2011, the PACE authors presented “improvement” results for the two primary outcomes of fatigue and physical function. They reported that about 60 percent of participants in the CBT and GET arms reached the selected thresholds for “improvement” on both measures. In a 2013 paper in the journal Psychological Medicine, they presented “recovery” results based on a composite “recovery” definition that included the two primary outcomes and two additional measures. In this paper, they reported “recovery” rates for the favored intervention groups of 22 percent.

Using the raw trial data that the court ordered them to release earlier this year, the PACE authors themselves reanalyzed the Lancet improvement findings according to their own initial, more stringent protocol definition of “improvement.” In this analysis, only about 20 percent “improved” on both measures—a third as many as the 60 percent they reported in The Lancet. Moreover, in the reanalysis, ten percent “improved” in the comparison group, meaning that CBT and GET led to “improvements” in only one in ten participants—a pretty sad result for a five-million-pound trial.

However, because these meager findings were statistically significant, the PACE authors and their followers have, amazingly, trumpeted them as supporting their initial claims. In reality, the new “improvement” findings demonstrate that any “benefits” offered by CBT and GET are marginal. It is preposterous and insulting to proclaim, as the PACE authors and Dr. Crawley have, that this represents confirmation of the results reported in The Lancet. Dr. Crawley’s statement that “the message from the PACE trial is not contested” is of course nonsense. The PACE “message” has been exposed as bullshit—and everyone knows it.

The PACE authors did not present their own reanalysis of the “recovery” findings—probably because those turned out to be null, as was shown in a reanalysis of that data by patients and academic statisticians, published on Virology Blog. That reanalysis found single-digit “recovery” rates for all the study arms, and no statistically significant differences between the groups. Dr. Crawley declared in the radio interview that this reanalysis used “a different definition of recovery, that was much harder to reach.” And she acknowledged that the reanalysis “didn’t show a difference”—but she blamed this on the fact that the PACE trial wasn’t big enough, even though it was the largest trial ever of treatments for ME/CFS.

This reasoning is specious. Dr. Crawley is ignoring the central point: The “recovery” reanalysis was based on the authors’ own protocol definition of “recovery,” not some arbitrarily harsh criteria created by outside agitators opposed to the trial. The PACE authors themselves had an obligation to provide the findings they promised in their protocol; after all, that’s the basis on which they received funding and ethical permission to proceed with the trial.

It is certainly understandable why they, and Dr. Crawley, prefer the manipulated and false “recovery” data published in Psychological Medicine. But deciding post-hoc to use weaker outcome measures and then refusing to provide your original results is not science. That’s data manipulation. And if this outcome-switching is done with the intent to hide poor results in favor of better ones, it is considered scientific misconduct.

*****

I also want to say a few words about the leaflet promoting FITNET-NHS. The leaflet states that most patients “recover” with “specialist treatment” and less than ten percent “recover” from standard care. Then it announces that this “specialist treatment” is available through the trial—implicitly promising that most of those who get the therapy will be cured.

This is problematic for a host of reasons. As I pointed out in my previous post, any claims that the Dutch FITNET trial, the basis for Dr. Crawley’s study, led to “recovery” must be presented with great caution and caveats. Instead, the leaflet presents such “recovery” as an uncontested fact. Also, the whole point of clinical trials is to find out if treatments work—in this case, whether the online CBT approach is effective, as well as cost-effective. But the leaflet is essentially announcing the result—“recovery”—before the trial even starts. If Dr. Crawley is so sure that this treatment is effective in leading to “recovery,” why is she doing the trial in the first place? And if she’s not sure what the results will be, why is she promising “recovery”?

Finally, as has been pointed out many times, the PACE investigators, Dr. Crawley and their Dutch colleagues all appear to believe that they can claim “recovery” based solely on subjective measures. Certainly any definition of “recovery” should require that participants can perform physically at their pre-sickness level. However, the Dutch researchers refused to release the one set of data—how much participants moved, as assessed by ankle monitors called actometers—that would have proven that the kids in FITNET had “recovered” on an objective measure of physical performance. The refusal to publish this data is telling, and leaves room for only one interpretation: The Dutch data showed that participants did no better than before the trial, or perhaps even worse, on this measure of physical movement.

This FITNET-NHS leaflet should be withdrawn because of its deceptive approach to promoting the chances of “recovery” in Dr. Crawley’s study. I hope the advertising regulators in the U.K. take a look at this leaflet and assess whether it accurately represents the facts.

*****

As long as we’re talking about the Dutch members of the CBT/GET ideological movement, let’s also look briefly at another piece of flawed research from that group. Like the PACE authors and Dr. Crawley, these investigators have found ways to mix up those with chronic fatigue and those with chronic fatigue syndrome. A case in point is a 2001 study that has been cited in systematic reviews as evidence for the effectiveness of CBT in this patient population. (Dr. Bleijenberg, a co-investigator on the FITNET-NHS trial, was also a co-author of this study.)

In this 2001 study, published in The Lancet (of course!), the Dutch researchers described their case definition for identifying participants like this: “Patients were eligible for the study if they met the US Centers for Disease Control and Prevention criteria for CFS, with the exception of the criterion requiring four of eight additional symptoms to be present.”

This statement is incoherent. (Why do I need to keep using words like “incoherent” and “preposterous” when describing this body of research?) The CDC definition has two main components: 1) six months of unexplained fatigue, and 2) four of eight other symptoms. If you abandon the second component, you can no longer refer to this as meeting the CDC definition. All you’re left with is the requirement that participants have suffered from six months of fatigue.

And that, of course, is the case definition known as the Oxford criteria, developed by PACE investigator Michael Sharpe in the 1990s. And as last year’s seminal report from the U.S. National Institutes of Health suggested, this case definition is so broad that it scoops up many people with fatiguing illnesses who do not have the disease known as ME/CFS. According to the NIH report, the Oxford criteria can “impair progress and cause harm,” and should therefore be “retired” from use. The reason is that results from such broadly defined studies cannot accurately be extrapolated to people with ME/CFS specifically. This is especially so for treatments, such as CBT and GET, that are likely to be effective for many people suffering from other fatiguing illnesses.

In short, to cite any findings from such studies as evidence for treatments for ME/CFS is unscientific and completely unjustified. The 2001 Dutch study might be an excellent look at the use of CBT for chronic fatigue*. But like FITNET-NHS, it is not a legitimate study of people with chronic fatigue syndrome, and the Dutch Health Council should acknowledge this fact in its current deliberations about the illness.

*In the original phrasing, I mistakenly referred to the intervention as ‘online CBT.’