Comments on Garth Kroeker: Biases associated with Industry-funded research

GK (2009-02-06):

Points well taken. Thank you for the excellent comment.

Occasionally I find interesting studies in smaller, "low-impact" journals. These studies probably were not assembled with the rigour or care required for publication in major journals. I think their findings deserve attention, even if the methods may be weak. Of course, the findings cannot be taken as absolute truths, but rather as suggestions of ideas to consider further.

Conversely, large, well-designed studies in major journals are sometimes built around a fairly orthodox set of hypotheses, and so the results, even if accurate, are not always particularly interesting (often they have already been applied in intuitive clinical practice for years anyway); the review panels may themselves be part of a fairly conservative academic orthodoxy.

In clinical practice, I find that most of the time we try "whatever it takes" to help, and usually we have already tried the approaches suggested by major studies (e.g. this or that antidepressant, this or that style of psychotherapy). When these standard approaches aren't working well, we need to find newer ideas; some of these are not well enough proven to be published in a major journal, yet their application may prove helpful. Some of these ideas can be found in lesser journals.

Yet, of course, we all need to be wary of getting so excited about a new idea that we are blinded to the possibility that it is not truly effective.
Sometimes the enthusiasm about something new produces a robust "placebo effect" or "guru effect". This effect may itself be helpful, but it can also do harm if it interferes with more effective treatments, or if it costs the person a great deal of money and time.

Anonymous (2009-02-05):

You (GK) may want to mention that just because a study is published doesn't mean it is a reliable resource, whether it is industry-funded or not.

Usually articles go through a review process. Sometimes the review panel picks up problems and rejects a submission pending further changes. Sometimes the panel is not very diligent and lets a study slide through to publication without proper review. Or the authors can choose not to follow the panel's directions and pursue publication elsewhere. In all of these cases, once the article is published, readers receive little information about its review process. (It is a definite case of "if all else fails, try, try and try again.")

Checking a journal's "impact factor" may give readers a better idea of how intensive its review process is: journals with a higher impact factor usually, but not always, undergo more rigorous screening.

NOTE: It takes a great deal of time and energy to fully analyze a study and pick out what is significant. Even the best scientists and medical personnel are often fooled by article abstracts if they don't take the time to scrutinize the methods, data, and statistics.