By David Tuller, DrPH
A British medical education company has recently disseminated a recruitment ad for a high-profile pediatric study of treatment for what it calls CFS/ME. The recruitment ad's headline describes the intervention being investigated as "effective," without caveat or reservation. (Full headline: "Chronic fatigue syndrome (CFS/ME): effective home treatment for teenagers")
To back up this assertion, the recruitment ad claims that a previous Dutch study of the intervention reported "impressive" results, with 65% in that arm achieving "recovery," compared to only 8% among other participants. The ad declares that these results were maintained for years. [In fact, the Dutch study reported this "recovery" rate as 66%.]
This e-mailed recruitment ad is for FITNET-NHS, a trial of online CBT for kids. The UK's National Institute for Health Research, an arm of the National Health Service, funded this major study, which is seeking to enroll more than 700 children. The recruitment ad was apparently sent to GPs; at least one received it and passed it along. The company that sent it out, Red Whale, or perhaps Red Whale/GP Update, states that 15,000 primary care practitioners take its courses annually. So perhaps the ad was widely distributed, perhaps not.
Whatever the case, the language in the recruitment ad appears likely to convince GPs that their patients have an excellent chance of cure if they're assigned to the treatment arm. And if the GPs embrace that belief, the patients they recruit to the study are likely to share it. In contrast, if patients are recruited through these GPs and are then assigned to the study's comparison arm, they are less likely to harbor such high expectations.
Inducing or promoting such preconceptions among prospective study participants might not be of such concern in other circumstances. For example, if this were a double-blinded drug trial with biomarkers as the designated outcome measures, a similar recruitment ad would not impact the comparisons between the study arms. (It could perhaps impact the composition of the entire study sample.) But FITNET-NHS is already designed in a way likely to maximize biased responses. Since it is clearly impossible to blind participants and providers to treatment allocation, this is an open-label trial. It also relies on subjective outcomes.
Combining these two traits in a single study can create conditions for so much bias as to render reported findings uninterpretable, which is why other fields of medicine reject such evidence. With its problematic open-label/subjective-outcome design, FITNET-NHS should probably not have been approved and funded in the first place. (In contrast, it is possible to obtain viable data from open-label trials that include objective outcomes, and from blinded trials with subjective outcomes.)
Despite these flaws, the study is underway, so it would have made sense to avoid exacerbating the problem by introducing even more potential bias. Yet the recruitment ad does just that, even before treatment allocation. Not only does the ad seem to promise an unequivocal shot at "recovery," it provides misleading and exaggerated claims about the Dutch study, another open-label trial with self-reported outcomes.
Were the Dutch "recovery" results really "impressive," as claimed in the FITNET-NHS recruitment ad? Not if you consider that the investigators did not pre-specify the definition of "recovery." They created it after viewing the results, so these were post-hoc findings. Post-hoc findings carry much less weight than pre-specified ones, and it is inappropriate to cite them without noting that they are post-hoc. Yet the recruitment ad does not mention this key fact. [A recruitment video on the FITNET-NHS website also includes the "recovery" claim.]
The Dutch FITNET investigators are longtime associates of members of the PACE team. But even two of the PACE authors, in a Lancet commentary, expressed skepticism about these "recovery" figures. They noted pointedly that these were post-hoc results and that "the criteria used to define recovery were not stringent." If that's the case, why did the FITNET-NHS recruitment ad promote these same results to GPs as "impressive"?
What about the claim that findings from the Dutch online CBT group were maintained at long-term follow-up? That's true, as far as it goes. But the recruitment ad does not mention the salient detail that other trial participants scored the same at long-term follow-up as those assigned to the online CBT arm. In other words, the treatment conferred no long-term benefits, according to the study's findings. A clinical trial is designed to compare results between treatment groups. To highlight within-group findings rather than between-group findings is a deceptive way to report clinical trial results, whether in a peer-reviewed paper or in a recruitment ad.
I addressed the problems with FITNET-NHS and its Dutch predecessor in blog posts in late 2016, here and here. (I repeatedly offered the FITNET-NHS team a chance to respond to my criticisms at length on Virology Blog, but never heard back. However, my initial FITNET-NHS post was publicly highlighted in a lecture slide as an example of "libellous blogs." My efforts to obtain an explanation or an apology for this false accusation were ignored by the relevant parties. I would still welcome such an explanation or apology. As I have repeatedly made clear, I would also be happy to post on Virology Blog any documentation proving that criticisms I have made of FITNET-NHS, or any other studies, are inaccurate.)
Investigators are supposed to consider reasonable alternative explanations for their findings. Beyond the bias inherent in the Dutch FITNET study design itself, there is a very reasonable alternative explanation for why those receiving online CBT might have reported improvements: These patients were able to stay at home rather than having to attend in-person treatment sessions. Perhaps they were better able to pace themselves and therefore less likely to exceed their energy thresholds and suffer relapses. The Dutch investigators did not consider this self-evident possibility. Sometimes people are so attached to their own perspective that other logical interpretations never occur to them.
Another problematic issue is that low 8% "recovery" rate among those who received "usual care." In the Dutch study, the most common forms of "usual care" were in-person CBT and GET. Like their UK colleagues, the Dutch investigators have long promoted these two therapies as the treatments of choice, so it is perplexing that participants who received them fared so badly. Because the investigators provided few details about the quantity or quality of these "usual care" interventions, the reasons remain unknown. But the poor findings raise concerns about the reliability and validity of claims of treatment effectiveness from earlier studies by members of the same research group.
Who is Red Whale/GP Update, anyway, and why is it providing unreliable and overblown information to physicians to get them to recruit vulnerable kids into a clinical trial? Most of its work appears to involve courses updating GPs on clinical practice. This is the description of the company on its website:
"We are one of the leading providers of primary care medical education in the UK with around 15,000 primary care practitioners attending our courses each year. We specialise in producing courses that are evidence-based, very relevant to everyday practice, and full of action points that delegates can take away and implement immediately." The firm further proclaims itself to be free of pharmaceutical funding.
I can't find anything on the website that describes the company's role in providing recruitment outreach services for clinical trials, although just because I can't find it doesn't mean it's not there. But this recruitment ad misrepresents earlier findings in a way likely to generate a sample of pre-biased study participants. It undermines Red Whale/GP Update's self-congratulatory assertion that it produces "evidence-based" materials. Perhaps Red Whale/GP Update is so focused on avoiding drug company influence that it is blind to bad science from powerful non-pharmaceutical interests, including government-funded researchers.
Robert MacPlough says
Dear Mr. Tuller,
You have done a lot of good work for ME patients, no question about that.
But there is one thing you need to do that would, from my point of view, make it crucially better!
By writing and talking about ME+CFS, you help sustain the mixing of ME definitions among scientists and ME patients. That is why you cannot effectively counter the scientists of the psychological brigade: they exploit precisely that mix of definitions! Keep strictly to the WHO ICD-10 definition of the neurological disease (code G93.3), with CFS under a different ICD-10 code. By repeating this over and over again (marketing), ME patients and scientists will in the end accept this distinction. This is the only distinction that is crucial for how to investigate this disease. That is why Wessely, Sharpe and others like to mix up the definitions of ME/CFS: they can say that they investigate ME when it is really CFS (vague). NEVER, I write never, bend to the term CFS/ME or CFS, but stay with the definition of the WHO.
Don't forget that during the polio epidemic in Sweden, Ivar Wickman discovered a fifth, separate form of polio with symptoms of ME: see his scientific report published in 1907: Ivar Wickman, 1907, book, Beiträge zur Kenntnis der Heine-Medinschen Krankheit (research into poliomyelitis; first description of ME-like symptoms).
My real name is: Robert A.J. van der Ploeg