A recent article in the UK journal Nature suggests that published trial results do not always match the original trial designs.
One of the first things any child learns about science is that a successful experiment must not only be reproducible but must also define upfront exactly what it is investigating.
This is certainly the case for clinical trials, where the outcome measures that will be used to judge a treatment's effect should be declared before the trial begins. Part of the reason is to prevent researchers from trawling through masses of data for selective values that give the results they want. Experimental design features such as double-blinding are in place to guard against bias in much the same way.
According to the article in Nature, this "outcome-switching" is quite common, despite numerous codes of conduct set up to prevent it. Since October, a team at the Centre for Evidence-Based Medicine at the University of Oxford has been checking the outcomes specified in trial protocols against every trial report published in five top medical journals. Alarmingly, they discovered that most had discrepancies – some of them major.
The group wrote to each journal in question pointing out the "errors." Some published full corrections, as you'd expect. Others, however, argued that it was acceptable to identify the "pre-specified" outcomes after a trial had begun – which seems confused, to say the least.
Outcome-switching is not tolerated at school – and it should certainly not be tolerated when lives are at stake. There is nothing wrong with examining data for hitherto unsuspected patterns, but the right thing to do is to follow up such discoveries with a properly designed trial – not to pretend you were looking for them all along!
Read more [here]