Trial By Error: So What’s Happening with the MAGENTA Trial?

By David Tuller, DrPH

I’ll be in Bristol later this week for the CFS/ME Research Collaborative’s annual conference. I was not welcome last year, since I was at that point engaged in harshly criticizing the organization for its unwillingness to acknowledge that its deputy chair had falsely accused me of libel. This year, things have changed and both the chair and the new deputy chair have graciously welcomed me.

In any event, given the proximity of the conference to the University of Bristol, it seemed like a good time to take another look at some of the problematic research conducted on children at that august institution. In my past scrutiny of this work, I have focused on a school absence study, a study of the Lightning Process, the ongoing study of online CBT called FITNET-NHS, and various prevalence studies—all of them deeply flawed.

I haven’t paid much or any attention to the ongoing MAGENTA trial. (Full name: Managed Activity Graded Exercise in Teenagers and Pre-Adolescents). But commenters on the Science for ME forum have recently noted that the MAGENTA investigators had folded a feasibility trial into the full trial—a similar strategy to that pursued by the investigators of the Lightning Process study. This prompted me to take a look.

The MAGENTA trial was designed to test graded exercise therapy against activity management as a treatment for kids. (Frankly, the descriptions of the two interventions do not sound all that different to me, except that the first focuses on increasing exercise and the second on increasing overall activity by a similar amount.) It goes without saying that MAGENTA suffers from a major design flaw shared by so many other studies in this field—it is an open-label trial relying solely on subjective or self-reported outcomes, so any results will be so fraught with bias as to be uninterpretable.

Before conducting a full-scale study, the MAGENTA investigators decided to conduct a so-called feasibility trial. Among the aims, as reported in the feasibility trial protocol: “To ascertain the feasibility and acceptability of conducting an RCT to investigate the effectiveness and cost-effectiveness of GET compared with activity management for the treatment of CFS/ME in children. We will use the information to inform the design of a full-scale, adequately powered trial.”

According to this protocol, the feasibility trial began in September, 2015. (The protocol was published in 2016.) Yet when the feasibility trial ended, the investigators did not design a new trial. Instead, in March of 2017, they sought—and received—research ethics committee approval to extend this feasibility study into a full trial. After that, the initial trial registration was updated to include information about the full trial. The feasibility trial included 100 participants; the investigators sought to add another 122, for a total in the full trial of 222.

In the feasibility trial protocol, the designated primary outcome was an assessment of the feasibility and acceptability of a full trial. The protocol included lists of the data and questionnaires the investigators planned to collect but did not designate any of them as primary or secondary outcomes for assessing treatment efficacy. Almost all the outcomes were self-reported. School attendance was listed, but it was self-reported attendance, which is subject to bias in a way that official school attendance records are not.

Only after collecting data for the feasibility study did the investigators designate which measures were the primary and secondary outcomes for the full trial. Because they were folding their feasibility study participants into the larger sample, they were thus able to prioritize outcome measures based on actual data from the trial sample—an excellent way to bias the reported findings. In this case, the investigators designated physical function at six months as the primary outcome measure after almost half the full study sample had already provided data.

Physical function might seem like an obvious choice for primary outcome, since it has been a primary outcome in other studies of this illness. However, it was not the only candidate here. Fatigue might have been selected, for example, or one of the other scales. In the feasibility trial of the Lightning Process, school attendance at six months was the primary outcome, although that was demoted to secondary outcome status after the investigators reviewed the feasibility study data. It is certainly possible or perhaps even likely that physical function was selected because, per the feasibility study findings, it generated the most positive results of the various available options.

Moreover, in between the feasibility trial and the full trial the investigators seem to have dropped MAGENTA’s only objective measure—levels of physical activity assessed by accelerometers worn for a week. In the feasibility trial protocol, the investigators noted that accelerometers had been “shown to provide reliable indicators of physical activity among children and adults.” Yet the trial registration does not mention accelerometers, so the reason for their absence from the full trial is hard to understand.

Presumably, the investigators found their use as an outcome measure to be either infeasible or unacceptable. MAGENTA is therefore left without a single objective outcome measure. It is worth noting that, in previous studies of this illness, objective measurements of physical activity have failed to corroborate the positive outcomes on self-reported measures. Moreover, investigators in this field have routinely ignored these objective findings and have highlighted instead the better-looking subjective results. In fact, PACE itself dropped the use of similar devices after Dutch researchers found the results did not support claims of improvement. So the MAGENTA investigators’ decision to “disappear” their sole objective measure is not too surprising, whatever their reasons.

In the trial registration, the study is now wrongly labeled as “prospective.” Perhaps it was “prospective” in 2015, when the trial was first registered. But if you have designated your primary and secondary outcome measures or have dropped outcome measures only after almost half your sample has provided data, you cannot legitimately call the overall study “prospective.” An important feature of a “prospective” study is that primary and secondary outcome measures are pre-designated. That did not happen in the MAGENTA trial.

What is going on with the regional REC? Why are its members failing so completely in their oversight function? This is presumably the same REC involved in the other egregious studies from the university. The committee’s self-evident incompetence and its lack of professional understanding of what is required for research to be conducted in an ethical and appropriate fashion are shocking.

In the Lightning Process paper, as I have documented, the investigators similarly received REC permission to extend a feasibility trial while swapping primary and secondary outcomes. The published paper in Archives of Disease in Childhood failed to disclose that the outcome measures were swapped after more than half of the study sample provided data. The journal has posted an opaque notice about these missteps, whose inadequacy I have previously discussed. But the journal’s editor as well as Fiona Godlee, BMJ’s editorial director, have so far failed to fully resolve the issue or take responsibility for publishing this obviously deficient paper in the first place.

Beyond this, commenters on the Science For ME forum also noticed something odd about MAGENTA: Last month, the trial registration was updated again. This time, the start date of the trial was backdated by two years—from September 2015 to January 2013. Huh? The change is not explained, so its meaning or significance is unclear. But it is certainly odd. Is it possible the investigators did not realize at the time of the initial registration in 2015 that their trial had actually started two years earlier?

The MAGENTA trial is expected to finish sometime next year, with publications undoubtedly to follow. But we already know that whatever the results, they will be rife with bias and unable to provide any useful information about treatment options. That this kind of nonsense receives UK taxpayer funding and gets to pose as legitimate research represents a serious breakdown of academic, ethical and financial accountability standards.

Comments on this entry are closed.

  • Rosie Cox 17 September 2018, 6:18 am

    Ugh! At it again! It leaves me seeing red… or at least magenta.

  • AndyS4ME 17 September 2018, 6:48 am

    For those interested in reading the discussion thread that David references above, it can be found here, https://www.s4me.info/threads/magenta-managed-activity-graded-exercise-in-teenagers-and-pre-adolescents-esther-crawley.4808/

  • Lois 17 September 2018, 6:49 am

    bah! yes I remember wearing an ankle thing for the pace trial that was supposed to happen again. but didn’t. it was quite bulky and got some raised eyebrows at work so I was a bit relieved they changed it. With hindsight of course, maybe that was the wrong reaction for me to have…. these days though monitors are so much smaller, it wouldn’t be an issue. and relatively cheap too. with lots of mass market alternatives.

this REC? have you been talking to them or the national body if there is one about their standards and if they are adhering to them? does this slipshod way of working happen with lots of trials? or just these ones?

  • Margaret Laverick 17 September 2018, 9:23 am

    Another disgraceful waste of tax payers money! It’s heartbreaking we continue to have to deal with so called experts who disregard the suffering of so many.

  • Jan winters 17 September 2018, 6:05 pm

    Informed consent was not gained from any of the participants. I have copies of the consent info given to the participants and their parents. It all claims that no negative effects have been reported for graded exercise therapy. However, the trial organiser must have known that the PACE trial reported (via page 4 of the 2011 Lancet PACE trial article’s supplementary webappendix) that Severe Adverse reactions had occurred within the group that received GET. This means that all 3 hospitals involved in the early part of the trial failed to obtain informed consent from both parents and their children/young people and thus the trial was unethical.

  • helen richardson 18 September 2018, 4:29 am

It’s rubbish like this “research” taking up the resources that means there is no valid research to help my 14 year old son as he lies in his bed watching his future disappear. These people are not just guilty of appalling research protocols, oversight etc. but they are keeping children trapped in this illness by not doing their jobs properly. Thank you, David, yet again for calling this out.

  • Richard "boolybooly" Ensor 18 September 2018, 8:00 am

The Lightning Process is a disastrously misguided new age NLP pyramid cult.

The known effects of ME on the nervous system and brain make it undeniable that NLP is not appropriate for encouraging people with ME toward more activity or denial of symptoms, when they are typically neurologically hypersensitive due to physiologically caused neurological hyperresponsiveness and therefore psychologically hypersuggestible. Regarding the so-called LP experiment, that also means any kind of self-reporting is completely unreliable when it is influenced all along by those whose shamanic invention is the subject of the assessment.

    That this and other anti-science has gone unremarked by the research ethics committee at Bristol is a matter for serious concern and you are doing the scientific community as a whole a great service in confronting their misconduct David, as well of course as the community of ME patients today and the future of ME treatment the world over.

    So thankyou and please do keep going and help Bristol REC get to the bottom of this problem.

  • Steve Hawkins 18 September 2018, 6:39 pm

    In many cases, when it comes to activity monitoring, it is likely that a good percentage of participants are already routinely uploading all their activity data to Google without knowing it.

    Unless you have turned off location services and access to the built in devices on your phone, it tells Google everywhere you go and every move you make/if you are sitting still etc., so that they can work out from this and your geographical location–near to a cinema, shop, sports location, particular person’s house, etc.–exactly what you are doing, even if you are not clicking links or looking at ads, and don’t even have your browser on: they then determine your likes, predict your future actions, and modify the advertising you see and who they share your data with accordingly.

    From their Privacy Policy section, you can read:

    “Sensor data from your device

    Your device may have sensors that can be used to better understand your location and movement. For example, an accelerometer can be used to determine your speed and a gyroscope to figure out your direction of travel.”

    Thus, for many, if not most of the people taking part in modern clinical trials, their actual real time activity is known to the microsecond–in incredible detail far more than any clinical trial designer would be able to imagine or dare think it was possible to obtain–, by Google already, and it may only be necessary for researchers to be given permission by patients to purchase all your activity for years leading up to a trial, the duration of the trial, and years after it.

    Younger readers will probably be able to know their energy expenditure and miles covered in their entire lives: and so will Google, and its customers, forever.

    [My own phone has built in accelerometers and gyros, but there doesn’t seem to be anything to use them for. Now I know why: they’re for the manufacturers and advertisers: not me.]

    This data exists in obsessional detail only the vampires in Google and the advertising and spying industries seem to appreciate.

    It could, therefore, probably still be established whether any recent trial of GET/CBT or other intervention resulted in any increased activity, simply by downloading the existing data.

    Researchers only have to ask.
    Or are they afraid of what they might find?

    Incidentally, nearly all/all of these ‘clinical trials’, follow the model that drug companies use to simply compare their own product with that of a rival: *they don’t want to find a treatment that cures*: Cures spell financial disaster for drug cos and psychopushers alike.

    The psychology industry is as bad as the drug cos: the CBT/GET promoters are only interested in showing that their own ‘non-cure’ is less disadvantageous than a rival (or non-monetisable) ‘non-cure’. That is why you don’t have any real scientific trials to see if a treatment actually works at pointing the way to a cure. You just have subjective questionnaire comparisons with ‘APT’ or ‘standard GP therapy’, that are designed to blind people with science, so they don’t notice that the treatment promoters are deliberately not addressing the real problem with real objective research aimed at cure.

    Why not ask the ‘Research Collaborative’ about a project to collect patients’ accelerometer data from Google, and put an end to all this nonsense for good?

  • Gryfalcom 12 October 2018, 10:07 am

    Once again, an example of corruption and unprofessional conduct in scientific research, particularly in the field of psychology.

    Imagine if this is the way that we researched a new heart valve or a brain surgery technique! Would you want to go under that knife?

    Per Fink, an extraordinarily ambitious doctor from Denmark, has been invited by Columbia to speak at the 4th Columbia Psychosomatics Conference, where continuing medical education credits (CMEs) will be offered for hearing his lecture. It may seem trivial to debate seemingly “minor” points about ethical research, but the experience of Karina Hansen at the hands of Per Fink demonstrates where this slippery slope can lead: to kidnapping, assault and torture. Per Fink convinced the government of Denmark that together they could gain international fame by promoting new psychosomatic diagnostic criteria on the world stage. He came up with “Bodily Distress Syndrome,” convinced Denmark to replace the Myalgic Encephalomyelitis diagnosis with it, and then started work on getting it published in the ICD-11. But to do that, he needed research subjects. Along comes Karina Hansen, recently diagnosed with M.E. Perfect. But she wisely wanted to be treated for her condition, not given an experimental diagnosis and treated for a condition she did not have. So Per Fink colluded with the government to have her dragged from her home and locked in his clinic. His treatment of her was from beginning to end an experiment, and now he is presenting these fruits of the forbidden tree, a body of work based on experimenting on a patient who did not consent, for medical education credits at Columbia.

    The reason I bring this up is that it may seem trivial to debate points like folding a feasibility study into a full trial, choosing evaluation criteria after the feasibility study, or using subjective measures, but these ethical violations and this shoddy experimental technique can have very serious consequences when they produce biased research results that can be used to support and achieve political ends, which can end in severe human rights abuses. This is the proverbial slippery slope.

    In the Karina Hansen case, and in the Justina Pelletier case, we have seen where unethical and unprofessional conduct in the medical field can lead, and how “science” can be used as justification for governments to commit human rights abuses against individuals. Thank you for holding scientific researchers accountable. It is terrifying to see what happens to real people when ethics are abandoned in the medical profession, and when science becomes a tool for committing human rights abuses. Even seemingly “harmless” therapies, like studying the effects of exercise, can have profound impacts on human rights. And when we blur the lines in scientific research, not only do we present junk as real science, we also create so much haze that it is impossible to distinguish the real science from the junk. Is the new heart valve surgery just as reliable (or not) as Per Fink’s “Bodily Distress Syndrome” treatments? If so, perhaps we’d be better off not going under that knife. Somehow, we have to be able to tell the real science from the quackery, or this is all for naught.