There is evidence that research studies sponsored by pharmaceutical companies produce biased results. Here is a collection of papers supporting this claim:
http://ajp.psychiatryonline.org/cgi/content/full/162/10/1957
This paper from the American Journal of Psychiatry reports that industry-sponsored studies are 4.9 times more likely to show a benefit for the sponsor's product (see the short arithmetic sketch after this list of papers for what a ratio like that means).
http://www.ncbi.nlm.nih.gov/pubmed/15588746
This paper shows an association between industry involvement in a study and the study reporting a larger benefit for the industry's product (in this case, newer antipsychotics).
http://bjp.rcpsych.org/cgi/content/full/191/1/82
This study's findings suggest that direct involvement of a drug-company employee in the authorship of a study makes it more likely that the study will report a favourable outcome for the company's product.
http://jama.ama-assn.org/cgi/content/full/290/7/921
This is a very important JAMA article: it shows that industry-funded studies are more likely than non-industry studies to recommend the experimental treatment (i.e. to favour the sponsor's product), even when the data are the same.
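As an aside, to make concrete what a figure like "4.9 times more likely" means: results like this are usually reported as an odds ratio. Here is a minimal sketch of the arithmetic, using made-up illustrative counts rather than data from any of the papers above:

```python
# Illustrative odds-ratio arithmetic (hypothetical counts, NOT data
# from the papers above): how much more likely is an industry-sponsored
# study to report a benefit for the sponsor's product?

# Hypothetical 2x2 table of study outcomes
industry_favourable, industry_unfavourable = 80, 20
independent_favourable, independent_unfavourable = 45, 55

# Odds of a favourable result within each group
odds_industry = industry_favourable / industry_unfavourable            # 4.0
odds_independent = independent_favourable / independent_unfavourable   # ~0.82

# The odds ratio compares the two groups: a value near 4.9 means
# industry sponsorship multiplies the odds of a favourable finding
# by roughly that factor.
odds_ratio = odds_industry / odds_independent
print(f"odds ratio = {odds_ratio:.1f}")  # prints 4.9 with these counts
```

With these hypothetical counts the odds ratio works out to about 4.9; the actual papers estimate this ratio from their own samples of published studies.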
I do not publish this post to be "anti-drug company". I think the pharmaceutical industry is wonderful. The wealth of many of these companies may allow them to do very difficult, high-tech research with the help of some of the world's best scientists. The industry has produced many drugs that have vastly improved people's lives, and that have saved many lives.
Even the profit motive of companies can be understandable and healthy...it may create economic pressure to produce treatments that are actually effective, and that are superior to competitors' products.
Sometimes the research trials necessary to show the benefit of newer treatments require such a large scale that they are very expensive...sometimes only a large drug company actually has enough money to sponsor trials of this type.
BUT...the profit-driven orientation of companies may cause them to take short-cuts to maximize profits...
-marketing efforts can distort the facts about the effectiveness of a new treatment
-and involvement in comparative trials by an eager, profit-driven industry very likely biases results, and biases the clinical behaviour of doctors
A solution to some of these problems is to always require frank transparency about industry involvement when publishing research papers.
Another solution is to have more government funding for independent, unbiased large-scale clinical trials.
And another solution is for all of us to be better informed about this issue!
2 comments:
You (GK) may want to mention that just because a study is published doesn't mean it is a reliable resource, whether or not it is industry-funded.
Usually articles go through a review process. Sometimes the review panel picks up problems and rejects a paper pending further changes. Sometimes the panel is not very diligent and lets a study slide through to publication without proper review. Or the authors can choose not to follow the panel's directions and pursue publication elsewhere. In all these cases, once the article is published, readers receive little information about the article's review process.
(It is a definite case of: "If all else fails, try, try and try again.")
Sometimes checking the "impact factor" of a journal may give readers a better idea of how intensive the review process is. Usually (but not always), journals with a higher impact factor undergo more rigorous screening (a rough sketch of how an impact factor is calculated follows the note below).
NOTE: It takes a great deal of time and energy to fully analyze a study and pick out what is significant. Even the best scientists and medical personnel are often fooled by article abstracts if they don't take the time to scrutinize the methods, data, and stats.
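For anyone curious where that number comes from: the standard two-year impact factor is essentially citations divided by citable items. A minimal sketch of the calculation, with made-up numbers for illustration only:

```python
# Rough sketch of the standard two-year journal impact factor
# (hypothetical numbers, for illustration only).

# Citations received in 2009 to articles the journal published
# in 2007 and 2008:
citations_2009_to_2007_2008 = 1200

# Number of "citable items" (articles, reviews) the journal
# published in 2007 and 2008:
citable_items_2007_2008 = 400

# Impact factor for 2009 = citations / citable items
impact_factor_2009 = citations_2009_to_2007_2008 / citable_items_2007_2008
print(f"2009 impact factor = {impact_factor_2009:.1f}")  # prints 3.0
```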
Points well taken. Thank you for the excellent comment.
Occasionally I find interesting studies in smaller, "low-impact" journals. These studies probably were not assembled with the rigour or care required to get published in major journals, but I think their findings deserve attention, even if the methods may be weak. Of course, the findings cannot be taken as absolute truths, but perhaps as suggestions of ideas to consider further.
Conversely, sometimes large, well-designed studies in major journals may be assembled with a fairly orthodox set of hypotheses, and therefore the results, even if accurate, are sometimes not particularly interesting (often these results have already been applied as part of intuitive clinical practice for years anyway); the review panels may themselves be part of a fairly conservative academic orthodoxy.
In clinical practice, I find that most of the time, we try "whatever it takes" to help, and usually we have tried the ideas suggested by major studies (e.g. this or that antidepressant, this or that style of psychotherapy). When these standard approaches aren't working very well, we need to find newer ideas--some of the newer ideas are not well-enough proven to get published in a major journal, yet their application may prove to be helpful. Some of these ideas can be found in lesser journals.
Yet, of course, we all need to be wary of getting so excited about a new idea that we are blinded to the fact that it is not truly effective. Sometimes the enthusiasm about something new leads to a robust "placebo effect" or "guru effect". This effect may possibly be helpful, but it could also do harm if it interferes with more effective treatments, or if it costs the person a lot of money and time.