Trial By Error: More on Cochrane’s New Risk of Bias Tool

By David Tuller, DrPH

As Virology Blog has reported, the lead author of the revised version of Cochrane’s Risk of Bias tool, published last week in BMJ, is Professor Jonathan Sterne, a long-time Bristol University colleague of Professor Esther Crawley. In that capacity, he is a co-author of two high-profile studies that violated key principles of scientific investigation: the Lightning Process study, published by Archives of Disease in Childhood two years ago, and the 2011 school absence study, published in BMJ Open.

I have reported at length on the problems with these studies here and here, and in many subsequent posts. In the Lightning Process study, the authors recruited more than half the sample before trial registration, swapped primary and secondary outcome measures after gathering data from these early participants, and failed to disclose these salient details in the published paper. The outcome swap allowed Professor Crawley, Professor Sterne and their colleagues to report positive rather than null findings for their official but revised primary outcome.

In July, after more than a year of investigation, Archives of Disease in Childhood, a BMJ journal, published a “correction” that acknowledged the methodological missteps. Yet the journal still allowed Professor Crawley, Professor Sterne and their colleagues to republish the biased findings from the initial paper.

Last week, Virology Blog sent an open letter protesting this untenable decision to Dr Fiona Godlee, editorial director of BMJ. The letter, signed by 55 scientists, academics and clinicians from Harvard, Stanford, University College London, the London School of Hygiene and Tropical Medicine, Berkeley, Columbia, Queen Mary University of London and other institutions, expressed dismay at the journal’s “scientifically and ethically indefensible” actions. Some of the signers have subsequently made their own calls for retraction of the Lightning Process study.

*****

Given this background, it is perhaps not surprising that Professor Sterne’s revised Risk of Bias tool makes it easier for unblinded studies to be identified as having a “low risk” of bias, as the smart folks on the Science For ME forum have noted. Professor Sterne and his co-authors themselves make the point in their discussion section.

Here’s what they write:

“We expect the refinements we have made to the RoB tool to lead to a greater proportion of trial results being assessed as at low risk of bias, because our algorithms map some circumstances to a low risk of bias when users of the previous tool would typically have assessed them to be at unclear (or even high) risk of bias. This potential difference in judgments in RoB 2 compared with the original tool is particularly the case for unblinded trials, where risk of bias in the effect of assignment to intervention due to deviations from intended interventions might be low despite many users of the original RoB tool assigning a high risk of bias in the corresponding domain. We believe that judgments of low risk of bias should be readily achievable for a randomised trial, a study design that is scientifically strong, well understood, and often well implemented in practice.”

This paragraph packs a lot of problems into a thicket of sentences that are not easily untangled. In discussing the risk of bias in studies with unblinded interventions, the authors mention only bias that might occur when there are “deviations from intended interventions.” While such deviations can cause bias, they are not the only source of bias in studies with unblinded interventions. In particular, when studies combine unblinded interventions with self-reported outcomes, the self-reported outcomes are likely to be influenced by expectations, hopes and other aspects of participants’ relationships to the assigned interventions and/or to the providers of those interventions, even in the absence of any deviation from the intended or assigned interventions. The likelihood of such bias is why other fields of medicine do not generally develop guidelines or policy from studies with this specific design: unblinded interventions relying on self-reported outcomes.
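To make the concern concrete, here is a minimal schematic sketch, in Python, of the kind of mapping the quoted passage describes for the “deviations from intended interventions” domain. This is my own illustration of the logic under discussion, not the published RoB 2 algorithm, and the function and parameter names are hypothetical.

```python
# A schematic illustration (mine, not the published RoB 2 algorithm) of the
# mapping described in the quoted passage: in the "deviations from intended
# interventions" domain, lack of blinding alone does not raise the judgment.

def domain2_judgment(blinded: bool, deviations_arising_from_context: bool) -> str:
    """Hypothetical decision rule for the 'deviations' domain; illustration only."""
    if blinded:
        return "low risk"
    # Unblinded trial: under the quoted logic, only identified deviations
    # matter here. Whether outcomes are self-reported plays no role, which
    # is precisely the gap criticized above.
    if deviations_arising_from_context:
        return "some concerns / high risk"
    return "low risk"

# An unblinded trial relying on self-reported outcomes, with no recorded
# deviations, still comes out at "low risk" in this domain:
print(domain2_judgment(blinded=False, deviations_arising_from_context=False))
```

The sketch shows the shape of the problem: nothing in this decision path asks whether the outcomes were self-reported.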

Moreover, the statement in the revised Risk of Bias tool’s discussion section appears to reflect a conviction that randomization alone is likely to ensure a “low risk” of bias. One argument presented is that randomized trials have often been “well implemented.” It is hard to know exactly how to respond to this non sequitur. The fact that some or many randomized trials have been “well implemented” seems irrelevant to whether a specific randomized trial, whether of ME/CFS or any other disease or condition, has also been “well implemented” and should be considered at “low risk” of bias. As for being “well understood”…Yes, it is well understood that randomized trials combining unblinded interventions with self-reported outcomes will generate results that suffer from unknown levels of bias.

Other aspects of this revised Risk of Bias tool are also troubling. The appendix provides detailed guidelines on how to interpret its various planks, or decision points. One section covers whether study data were analyzed “in accordance with a pre-specified analysis plan that was finalized before unblinded outcome data were available for analysis.” Per the guidelines, “Changes to analysis plans that were made before unblinded outcome data were available, or that were clearly unrelated to the results (e.g. due to a broken machine making data collection impossible) do not raise concerns about bias in selection of the reported result.”

This expansion or elaboration of the meaning of “pre-specified” is significant. The revised Risk of Bias tool considers “changes to analysis plans” to be “pre-specified” if they occurred at any time before the unblinding of outcome data. This revision might be justified, or at least acceptable, for trials that rely on objective outcomes. Yet it overlooks what happens in trials with unblinded interventions that rely on self-reported outcomes, like the Lightning Process and PACE trials. In such cases, investigators are likely to be aware of outcome trends long before any outcome data are formally unblinded. No matter what Professor Sterne and his co-authors assert, changes to analysis plans made after the start of such trials should raise real concerns about bias arising from selective outcome reporting.
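To illustrate the timing rule as the guidelines state it, here is another minimal sketch, again with hypothetical names and dates of my own; the only criterion it encodes is whether a change predates the formal unblinding of outcome data.

```python
from datetime import date

# A minimal sketch (names and dates mine, not the published tool) of the
# timing rule quoted above: a change to the analysis plan counts as
# "pre-specified" whenever it predates the formal unblinding of outcome data.

def change_raises_selection_concern(change_date: date, unblinding_date: date) -> bool:
    """True if a change to the analysis plan raises selective-reporting
    concerns under the quoted rule; timing is the only criterion."""
    return change_date >= unblinding_date

# In an unblinded trial with self-reported outcomes, investigators may sense
# outcome trends well before the unblinding date, yet an earlier change is
# still treated as unproblematic:
print(change_raises_selection_concern(date(2012, 6, 1), date(2013, 3, 1)))  # False
```

Nothing in such a rule distinguishes a change made in genuine ignorance of the data from one made by investigators who have been watching self-reported outcomes accumulate.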

It bears noting that much research from the biopsychosocial field, including research into ME/CFS and so-called “medically unexplained symptoms,” or MUS, is of this type. So this revision of the Risk of Bias tool could have a huge impact on Cochrane reviews in these domains. The revision will certainly make it much easier for reviewers to scoop up bad science along with the good while officially rating it all as being at “low risk” of bias.

The argument adopted by Professor Sterne and his co-authors, that changes made up until the point when outcome data have been unblinded can essentially be deemed “pre-specified,” has been routinely used by the PACE authors to excuse their own rampant outcome-switching. The argument was further deployed by Lillebeth Larun, the lead author of the contested Cochrane exercise review for ME/CFS, in defending her inflated assessment of the PACE trial. The revised tool’s fallacious reasoning is also convenient for Professor Sterne himself, since it could presumably be used to overlook the outcome-swapping in his own Lightning Process trial.

Is anyone paying attention here? BMJ? Cochrane? Hello???
