Scientists regularly exaggerate their studies to ‘beautify’ the results, researchers claim.
A study of papers published in six psychiatry journals found more than half of them had been oversold to readers.
Experts may be claiming their results are stronger than they really are in a bid to get published in a competitive field.
And this may in turn influence how doctors treat patients, the team warned, even when the underlying evidence is weak.
Other academics not involved with the study said what it found was ‘depressing but not surprising’, but questioned whether it would actually affect patients.
Researchers at Oklahoma State University sifted through 116 papers in six top psychology and psychiatry journals between 2012 and 2017.
They were looking particularly at the abstract of each study – an introduction laying out a simple breakdown of what the scientists did and what they found.
And they chose only papers whose main finding was not statistically significant – meaning the result could plausibly be explained by chance.
Some 56 per cent of the papers were exaggerated in the abstract, the team found.
And two per cent had exaggerated titles, 21 per cent had misrepresented results sections and 49 per cent had overblown conclusions.
The researchers, led by Samuel Jellison, said: ‘Those who write clinical trial manuscripts know that they have a limited amount of time and space in which to capture the attention of the reader.
‘Positive results are more likely to be published, and many manuscript authors have turned to questionable reporting practices in order to beautify their results.’
Scientists have wide latitude in how they interpret their own results.
And readers may not realise the results have been overblown – or ‘spun’ – unless they read the full paper, which can run to dozens of pages.
Professor David Curtis, from University College London, was not involved in the study but said he expected the problem was widespread.
‘This study of papers published in top psychiatry journals shows something which is perhaps not particularly surprising – that authors tend to impart a positive spin to their papers,’ Professor Curtis said.
‘The abstract of the paper is freely available for everybody to see and indeed most people will only read the abstract without looking at the detailed results… I doubt that this applies only to psychiatry journals.
‘My bet would be that they would have obtained similar findings if they had studied papers published in general medical journals.’
Professor Curtis did not agree that the positive spin could influence doctors, adding: ‘I’m not really sure they use these to make important clinical decisions’.
Mr Jellison’s team did not find that scientists funded by companies with a commercial interest in psychiatric treatments were any more likely to put a positive spin on their results.
But they did find that trials comparing procedures or drugs against placebos were more likely to be affected.
They said scientists have an ‘ethical obligation’ to represent their results fairly and not to blow them out of proportion, even when the findings do not support their hypothesis.
Statistics expert at The Open University, Professor Kevin McConway, was also not involved but said: ‘I found it pretty depressing to read this piece of research.
‘And what’s even more depressing is that the findings aren’t at all surprising to me.
‘Similar studies of “spin” in medical research papers have been carried out in a range of specialist areas, including cancers, rheumatology, and various types of surgery’.
He added: ‘Of course, it would be best if the “spin” just wasn’t there at all.
‘But that’s going to be difficult to achieve, given the pressures on researchers to publish research, ideally in leading journals.
‘Consciously or unconsciously making the results seem rather “better” than they actually were is an obvious reaction to that pressure.’
The Oklahoma team’s research was published in the journal BMJ Evidence-Based Medicine.