Clinical studies on children have become a regular feature in medical journals, but a new review of such trials has found that about half of them, particularly the industry-funded ones, are at high risk of bias.
The review, by a team from the Johns Hopkins Hospital in Maryland, found that 40 to 60 percent of the studies either failed to take steps to minimise the risk for bias or failed to properly describe those measures.
The researchers, who examined 150 randomised controlled paediatric trials -- all published in well-regarded medical journals -- said their findings should serve as an eye-opener for doctors who rely on such studies.
The report, published in the journal Pediatrics, showed that trials sponsored by pharmaceutical or medical-device makers, along with studies not registered in a public-access database, had a higher risk for bias.
So did trials that evaluated the effects of behavioural therapies rather than medication, the report stated.
"There are thousands of paediatric trials going on in the world right now and given the risk that comes from distorted findings, we must ensure vigilance in how these studies are designed, conducted and judged," said lead researcher Michael Crocetti, a pediatrician at Johns Hopkins Children's Centre.
"Our review is intended as a step in that direction."
The researchers noted that the results of clinical trials, when peer-reviewed and published in reputable medical journals, can influence the practice of medicine and patient care. But a poorly designed or executed trial can lead researchers to erroneous conclusions about the effectiveness of a drug or a procedure, they said.
Citing the degree of bias risk in the studies they reviewed, the researchers cautioned pediatricians to be critical readers of studies, even in highly respected journals.
The investigators advised that when reading a report on a trial, pediatricians should not merely look at the bottom line but ask two essential questions: How did the researchers reach their conclusion? And was their analysis unbiased?
Doctors should apply "smell tests", common sense and skeptical judgment about whether the conclusions fit the data, especially when a study boasts dramatic effects or drastic improvement, they said.
For their research, Crocetti and colleagues used the Cochrane Collaboration tool, which assesses risk for bias along six critical aspects including randomisation (randomly assigning patients to different treatments) and masking -- the degree to which neither the patient nor the doctor knows which group of patients is receiving an active drug or intervention versus a placebo.
They found that overall, 41 percent of the 146 trials in the review had improper or poorly described randomisation techniques. Industry-funded trials were six times more likely to have a high risk for biased randomisation than government-funded trials or those funded by nonprofit organisations. And past research, the investigators pointed out, has shown that industry-funded trials are four to five times more likely to recommend an experimental drug.

"Industry funding is an important driver of medical discovery, but it is critical for investigators involved in such trials to ensure not only that the studies are conceived and executed cautiously with minimum risk for bias, but that any precautions taken against bias are also reported transparently," Crocetti said.