Trial By Error: Professor Crawley’s Bogus BuzzFeed Claims

By David Tuller, DrPH

Tom Chivers’ terrific BuzzFeed article on the Lightning Process and Professor Esther Crawley’s SMILE trial, whose results were published in the Archives of Disease in Childhood, has received a lot of attention and comment. I wanted to respond to the short sections in which Professor Crawley seeks to justify her methodological choices. Here are the relevant passages:

In the highest-quality medical trials, subjects are blinded – they don’t know whether they’re getting the treatment that’s being tested, or what it’s being tested against. It helps stop the results being biased in favour of the treatment. If you can’t blind the trial, said [Professor Jonathan] Edwards, then it’s important to ensure that you measure something that can’t be affected by patients’ perceptions. You can have an unblinded trial and measure everyone’s blood sodium concentration at the end, he said. They can’t do anything to their sodium concentration, so it doesn’t matter if they know whether they’re getting the treatment or not.

And the other way around is fine: if you blind everything so the patients don’t know whether they’ve had the treatment, then you can use a subjective measure. But you can’t have an unblinded trial and a subjective outcome. The SMILE trial, however, was unblinded, and Edwards also pointed out that the primary outcome that was measured was changed from an objective measure, school attendance, to a questionnaire. Edwards said such self-reported measures are often prone to bias, as subjects give the answers they think they are expected to give. For that reason, he believes, the trial is useless.

In response, Crawley said “all the outcomes were collected as planned, but children didn’t like our recommended primary outcome, school attendance, so we used disability.” She added that the primary outcome measure change was made, and reported, before results were collected.

The problems were, in some eyes, made worse by the fact that the methods of the Lightning Process involve making people say that they feel better. All this trial shows is that if you tell people to say they’re better, and then ask them if they’re better, they’ll say they’re better, said Edwards. It’s a textbook case of how not to design a trial.

He claims the SMILE trial’s results also undermine the PACE trial – which also used an unblinded design with subjective outcomes – by showing that “the same techniques can get you the same answer for a completely quack therapy based on complete nonsense like standing on pieces of paper and telling your disease to stop…”

Dorothy Bishop, a professor of developmental neuropsychology at the University of Oxford, told BuzzFeed News she was also concerned about the wisdom of running a trial [into something] that doesn’t seem to have much scientific basis and is commercial, because if you find a result you end up giving huge kudos to something that may not deserve it.

I don’t want to come down like a ton of bricks on Esther Crawley because I think she’s doing her best, she said, but she was concerned about a mega-placebo effect.

Crawley told BuzzFeed News it was possible that there was some placebo effect involved, but that the questionnaires she used in the trial asked questions about how far you can walk and how much school you attended, rather than simply whether people felt better. She added that self-reported school attendance lined up very well with the schools’ records of attendance.

**********

The concerns expressed by Professor Jonathan Edwards go to the heart of much of the criticism of studies from the biopsychosocial group: they are open-label studies that rely on subjective findings. This is a recipe for producing bias, as Professor Edwards explains, or a “mega-placebo” effect, in Professor Bishop’s words, especially since the Lightning Process itself involves telling participants that they can get well if they follow the program’s precepts.

Professor Dorothy Bishop’s presence in the BuzzFeed article is an interesting development. She allowed herself to get roped into the Science Media Centre’s efforts to hype the study, offering cautious praise along with other experts. Perhaps her statements to BuzzFeed represent a recognition that she squandered some of her reputational capital to promote what is clearly bogus research. She should certainly have been embarrassed by her participation in the dog-and-pony show hosted by the SMC, which never seems to miss a chance to highlight new research from members of the biopsychosocial ideological brigades.

In her response to BuzzFeed, Professor Crawley downplays the enormous placebo problem. She acknowledges only that it was possible that there was some placebo effect. She then appears to suggest she has addressed the issue because the questionnaires asked people to estimate numbers, not just qualities of feeling. This is a silly and unconvincing answer. (To be clear, this is Professor Crawley’s response as conveyed by the reporter, and my assumption is that it is an accurate representation of what she said. But I did not talk to Professor Crawley myself.)

As experienced researchers know, people are notoriously unreliable at estimating and reporting the metrics of personal experience, things like how far they walk, how much they eat, how often they use condoms, how much time they spend reading to their kids. It is not just that they forget; they are also influenced by other factors, like a desire to report improvements, a desire to please the researchers, or a desire not to admit that they ate another chocolate bar or skipped another day of school.

So, contrary to Professor Crawley’s implication, a questionnaire is not an objective measure just because it asks people how far they can walk. It remains a self-reported measure subject to a huge amount of potential bias, including the effect of repeated messages that the intervention, in this case the Lightning Process, will make them better. That’s why objective measures, when available, are a critical means of assessing the accuracy of self-reported responses.

In fact, in some past studies of behavioral and psychological interventions for ME/CFS, participants have worn ankle monitors to measure movement over the course of days or a week. In these studies, participants have reported subjective improvements even as the ankle monitors have demonstrated no evidence of increased movement. In the now-debunked PACE trial, the objective measures (how far people could walk, how fit they were, whether they returned to work, and whether they received benefits) all failed to match subjective reports of success.

Given this context, I was intrigued by Professor Crawley’s claim in BuzzFeed that schools’ official records of attendance matched the self-reported attendance figures. In the protocols for both the feasibility trial and the full trial, the investigators promised to seek access to these records. Yet the Archives paper did not mention these records at all, for unexplained reasons.

From Professor Crawley’s statement, it now appears that the investigators did in fact access such data. If so, why were they not included in the paper? In publicly citing these records to affirm the legitimacy of the study’s self-reported findings while failing to include them in the Archives paper, Professor Crawley has raised further questions about the trial’s claims and its methodological lapses.

Professor Crawley also states that the change in the primary outcome measure was made before results were collected. This is true. But it is also true that the change was made after some results had already been collected. Patients in the feasibility trial were recruited starting in September 2010, meaning the first 12-month results would have been collected in September 2011.

The feasibility study recruited participants through mid-2012. Only after that did the ethics committee approve both the extension of the feasibility trial into the full trial and the swapping of the outcome measures. Then the remaining participants in the full sample were recruited. That means that results were collected both before and after the change in the primary measure. Professor Crawley’s statement is accurate as far as it goes, but it is also incomplete and ambiguous, and therefore highly misleading.

Moreover, it is interesting that, according to Professor Crawley’s statement to BuzzFeed, the investigators changed the outcome measures based on what the children thought. It is rather unusual to delegate such key decision-making to trial participants. In any event, it is worth noting that the change purportedly favored by the kids in the study coincidentally allowed the Archives paper to make positive claims for its primary outcome.

What if the children had preferred to retain school attendance at six months as the primary measure? Would Professor Crawley and her colleagues have followed the children’s advice, and reported null results for their primary outcome? Perhaps I’m being too cynical, but that seems very unlikely.
