Biases in Research
Pharmaceutical Research
We are much more familiar these days with biases in pharmaceutical research studies. A clinical study of a medication is more likely to show an exaggerated beneficial effect if the study is sponsored by the manufacturer. This doesn't mean industry-sponsored research is "bad," and it doesn't mean that pharmaceutical products are "bad," but it does mean that we have to look with a careful, skeptical eye at research results--not just at impressive tables or graphs, but also at the sources of funding for the study and the authors' past relationships with the manufacturers. There could indeed be overt "badness" if those involved are flagrantly profiteering. But the more salient issue, in my opinion, is simply the need to question the authority of results from such studies.
Alternative Medicine
This same critical eye is very much needed when looking at research evidence regarding alternative treatments. Very strong sales tactics are used to market supplements, herbal remedies, and other treatments, and the standards of evidence presented are often much lower than those in pharmaceutical studies. For example, simple testimonial accounts are much more common in the marketing of alternative treatments, as are impressive-sounding but clinically irrelevant scientific or pseudo-scientific claims.
Psychotherapy Too!
We may assume that studies of psychotherapy would be relatively free of these biases. After all, there is no big company that is profiting from psychotherapy!
But we must maintain a critical eye even for studies of psychotherapy. Here are some reasons:
1) A positive study of a psychotherapy technique may not bring obvious financial profit to anyone, but it is likely to increase the prestige of the authors. A big part of the "currency" in a Ph.D. researcher's career relates to impressive publications. A study showing a significant treatment effect of a psychotherapy technique is likely to add to the fame and career advancement of the authors. This career advancement is analogous to direct financial gain.
2) Many psychotherapy researchers have spent many years of study devoted to their therapy technique. Imagine that you had spent 10 years studying a particular thing, and that you had strong feelings about it. You might well have a bias in favour of the technique you had studied all those years. You would really want to show that it works! If a study showed that it didn't work so well, it might lead you to question the value of all those years of your career! In Cialdini's terms, this bias has to do with "consistency": someone who has been consistently committed to a particular thing for a long time is biased to keep supporting that thing, beyond what would otherwise be reasonable. Furthermore, if you had worked all those years studying one particular technique, your social and professional community of peers would be more likely to share similar opinions. You might have frequently attended conferences devoted to your area of specialty. You might even have taught students the technique, who appreciated your help and mentorship. This leads to Cialdini's "social pressure" effect: since the people around you support your idea, you are more likely to hold onto the idea yourself, beyond what would otherwise be reasonable.
3) There is more and more direct financial gain related to therapy techniques. We see a lot of books, self-help guides, paid seminars and workshops, etc. Charismatic marketing, including through publishing of research studies, is likely to increase the financial profit of those involved.
4) In the psychotherapy research community, CBT is the most common modality. CBT is intrinsically easier to research, since it is more easily standardized, the techniques themselves involve a lot of measurement, and the style tends to be more precisely time-limited. CBT is more "scientific" and therefore attracts researchers whose backgrounds are more strongly analytical and scientific. There is nothing intrinsically wrong with this, but it leads to more bias in the research. Therapy styles other than CBT are studied less frequently, so there are fewer positive studies of other styles. This gives the impression that CBT is best--not because comparative studies have actually shown it to be best, but simply because it is studied more. New versions or variations of CBT (with different fancy-sounding names) are also frequently marketed, and often show good results in research, but once again this does not really prove that the techniques are best. The research study becomes an advertising tool for those who have designed the technique.
Conclusion
I do not mean to sound too cynical here... I think that CBT, like all other therapy techniques, is interesting, important, and helpful. We should all learn about these techniques and make use of some of their principles. But I do not think that any one style is necessarily "best." We should not allow biases in research, including simple marketing effects, to cause a large change in our judgment with respect to helping people.
I feel that the more important foundation in trying to help people is spending the time getting to know them, and hearing from the person you are with (whether it be a client, a patient, a family member, or a friend) what type of help they would actually like.
Also, different individual therapists have different personalities, interests, experiences, weaknesses, and skills. I think it is unhealthy for a community of therapists or healers to be pushed into offering a very narrow range of techniques or therapeutic strategies. Instead, I think that the individual talents and strengths of each therapist should be honoured, and there should be room in any health care system to allow for this.
Wednesday, March 9, 2016
Stimulant Medications for treating ADHD: A comparison
ADHD medication is a big business in the world today. Annual sales of ADHD medication are projected to reach $15-20 billion by 2020, increasing at a rate of about 8% per year. To put this in perspective, this is similar to the value of the worldwide market for fresh vegetables (http://siteresources.worldbank.org/INTPROSPECTS/Resources/GATChapter13.pdf). It is an amount of money that would pay the salaries of 400,000 teachers, each earning $50,000 per year.
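Just to make that arithmetic explicit, here is a minimal sketch in Python, using the upper $20 billion estimate quoted above:

```python
# Rough arithmetic behind the teacher-salary comparison above.
projected_sales = 20e9   # upper estimate of annual ADHD medication sales ($)
teacher_salary = 50_000  # assumed annual salary per teacher ($)

print(f"{projected_sales / teacher_salary:,.0f} teachers")  # -> 400,000 teachers
```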
A relevant article on this topic is by Alan Schwarz, published in The New York Times in 2013:
http://www.nytimes.com/2013/12/15/health/the-selling-of-attention-deficit-disorder.html?pagewanted=all&_r=0
I do not mean this post to be a discussion of the controversies of ADHD diagnosis. Instead, it will focus mainly on ADHD medication. I think the rising rate of ADHD diagnosis, and the rising rate of stimulant prescription, is a very concerning trend, particularly if these diagnoses and treatments are offered without attending adequately to other biopsychosocial factors, and particularly if these treatments are offered under the influence of unrecognized biases due to the financial power and influence of the manufacturers.
On the other hand, the rising awareness and acceptance of ADHD can allow those children, adults, and families who are dealing with ADHD-related issues to feel less stigmatized, judged, and unfairly treated. In families, knowledge and acceptance of ADHD can help child-rearing practices to be adapted, so as to avoid a harshly punitive stance towards those children with attention problems.
The newer ADHD medications are, not surprisingly, very popular: they are frequently prescribed, often touted as better than the older medications, and listed first in medication guidelines (such as the CADDRA recommendations).
Here is a comparison of costs per day between the different ADHD drugs, looking at a typical full therapeutic dose for an adult. These cost estimates come from a site called "Pharmacy Compass" which searches for the best local prices for medications at pharmacies.
1. Newer drugs (CADDRA considers these to be the only "first line" medications):
Adderall XR 30 mg: $3.91 per day
Biphentin 80 mg: $4.36 per day
Concerta 72 mg: $5.92 per day
Vyvanse 60 mg: $5.14 per day
Strattera 100 mg: $5.51 per day
2. Older drugs (CADDRA considers these "second line"):
Dexedrine spansules 40 mg: $3.59 per day
Ritalin (methylphenidate) 60 mg: $0.81 per day
Ritalin SR 60 mg: $0.66 per day
So we see that the least expensive options are methylphenidate (Ritalin) and methylphenidate SR. Relative to Ritalin SR, Dexedrine is over 5 times as expensive per day, Vyvanse about 8 times, and Concerta about 9 times.
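To make the ratio arithmetic transparent, here is a small Python sketch using the per-day prices listed above, with Ritalin SR (the cheapest option) as the baseline:

```python
# Per-day costs (in dollars) from the price list above.
costs = {
    "Adderall XR 30 mg": 3.91,
    "Biphentin 80 mg": 4.36,
    "Concerta 72 mg": 5.92,
    "Vyvanse 60 mg": 5.14,
    "Strattera 100 mg": 5.51,
    "Dexedrine spansules 40 mg": 3.59,
    "Ritalin 60 mg": 0.81,
    "Ritalin SR 60 mg": 0.66,
}

baseline = costs["Ritalin SR 60 mg"]
for drug, cost in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{drug:26s} ${cost:.2f}/day  ({cost / baseline:.1f}x baseline)")
# Dexedrine ~5.4x, Vyvanse ~7.8x, Concerta ~9.0x the cost of Ritalin SR
```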
I mention these cost differences not necessarily to favour the cheaper medications, but to heighten your suspicion that there could be bias in any research results regarding these medications--especially if the research is sponsored by the manufacturers--given the huge profit motives involved.
It would be fair to look for studies which carefully and prospectively compare Ritalin with one of the newer medications, in randomized head-to-head trials.
1) Vyvanse vs. Ritalin. There are almost no studies in the literature! In one study, all they looked at was whether patients stuck to a dosing regimen, on which measure the Vyvanse group did "better" (http://www.ncbi.nlm.nih.gov/pubmed/23937642). But this measure had nothing to do with the patients actually feeling better or improving more!
A better study compared Vyvanse with OROS-MPH, a long-acting version of Ritalin (though not plain old Ritalin itself!):
http://www.ncbi.nlm.nih.gov/pubmed/23801529
In this study, at first glance it certainly appears that Vyvanse is better! But looking carefully, one finds statements such as this: "At endpoint, the difference between lisdexamfetamine and OROS-MPH in the percentage of patients with an ADHD-RS-IV total score less than or equal to the mean for their age was not statistically significant." (p. 747) This statement was tucked into the results section but left out of the conclusion. Looking at side effects, we find a lower total rate of adverse effects in the OROS-MPH (methylphenidate) group; reduced appetite, insomnia, and nausea were more common in the Vyvanse group. Notably, there is a long list of conflicts of interest at the end of this paper, including some of the authors being employees of the Vyvanse manufacturer, and owning stock in the company!
In conclusion here, there is no doubt that Vyvanse is an effective medication for ADHD. The once-daily dosing regimen is very convenient, which may be particularly helpful for many people. But it is not necessarily superior to much cheaper alternatives. For some people (including many patients I have seen), regular methylphenidate (Ritalin) allows finer control of symptoms during the course of the day, without being "stuck" with a continuous sustained-release effect. Others certainly do prefer Vyvanse. I just think that Vyvanse should not be assumed to be better--the evidence for its superiority is very weak--while it is about 8 times more expensive than Ritalin SR!
2) Concerta vs. Ritalin
http://www.ncbi.nlm.nih.gov/pubmed/11389303
This is a good early study, directly comparing the two medications, published in Pediatrics in 2001. Here is the authors' concise summary: "On virtually all measures in all settings, both drug conditions were significantly different from placebo, and the 2 drugs were not different from each other." The reason to choose Concerta over Ritalin would be convenience. The authors do point out that "compliance" is more likely on a long-acting formulation. But remember that "compliance" is a very indirect, and possibly irrelevant, measure of health and well-being! Why is it important that there be better "compliance"? Should the criterion not be well-being? Certainly this is not a reason to classify Concerta as "better" or "first line"--while it is about nine times more expensive than Ritalin SR!
3) Adderall vs. Ritalin
http://www.ncbi.nlm.nih.gov/pubmed/10103335
In this study, published in Pediatrics in 1999, Adderall comes out looking better than Ritalin. But, once again, the study was sponsored by the manufacturer. On a close look, there is a problem: the doses of the medications were fixed, and the Ritalin doses appear too low to match the equivalent doses of Adderall given. One would usually give Ritalin at about twice the Adderall dose (i.e., 100% higher), but in this study the Ritalin dose was only 40% higher than the Adderall dose. Consistent with this relative under-dosing of Ritalin, the Adderall group not surprisingly had more side effects, such as insomnia.
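To see how far off the dosing was, here is the arithmetic with illustrative numbers (the doses are hypothetical; the only assumption carried over from the text is the roughly 2:1 Ritalin-to-Adderall potency ratio):

```python
# Illustrative dosing-mismatch arithmetic; the 2:1 potency ratio is the
# assumption stated in the text, and the 12.5 mg figure is hypothetical.
adderall_dose = 12.5                       # mg per day (illustrative)
equipotent_ritalin = 2.0 * adderall_dose   # ~2x Adderall to match potency
study_ritalin = 1.4 * adderall_dose        # only 40% higher, as in the study

fraction = study_ritalin / equipotent_ritalin
print(f"Study Ritalin dose was {fraction:.0%} of the equipotent dose")  # -> 70%
```

On those assumptions, the Ritalin arm received only about 70% of an equipotent dose, which by itself could explain Adderall's apparent advantage.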
In conclusion, there is no doubt that Adderall XR is a good medication for ADHD. Many of my patients have preferred it over the alternatives. But it is not fair, once again, to assume that it is better. It does not deserve to be considered "first line" while a similarly effective alternative at one-sixth the cost is considered "second line."
4) Meta-analytic comparison:
Faraone and Glatt (2010) have published a good meta-analytic review paper, which is worth reading in detail, with particular attention to the data tables and graphs: http://www.ncbi.nlm.nih.gov/pubmed/20051220
In the conclusion of this paper, the authors state that they "found no significant differences between short- and long-acting stimulant medications."
Addendum: a recent Cochrane review, published in February 2016 by Punja et al., concludes that there is a lot of evidence that amphetamines reduce the core symptoms of ADHD, but that they cause a variety of problematic side effects. The reviewers note evidence of substantial bias in the studies they examined, with the quality of evidence being low to very low.
Here is a direct quote from their conclusion: "This review found no evidence that supports any one amphetamine derivative over another, and does not reveal any differences between long-acting and short-acting amphetamine preparations."
Friday, February 19, 2016
Do Higher Doses of Antidepressants Work Better?
It is common practice in psychiatry to increase the dose of an antidepressant if the standard dose is not helping enough. Sometimes doses are increased before even finding out if the lower dose is working.
But it is interesting to consider evidence that higher doses actually do not necessarily work better:
Ruhé et al. (2009-2010) have published research on this issue, concluding that SSRI dose increases do not improve effectiveness. Their explanation is quite simple: serotonin transporters are already well-occupied at standard doses, and this changes little with dose increases:
http://www.ncbi.nlm.nih.gov/pubmed/18830236
http://www.ncbi.nlm.nih.gov/pubmed/20862644
In general, it is striking how scanty the evidence is that increasing antidepressant doses leads to improved effectiveness, even for treatment-resistant cases.
This issue came to my attention upon reading Lam's recent article about using light therapy to treat non-seasonal depression (http://www.ncbi.nlm.nih.gov/pubmed/26580307). Their medication groups used only 20 mg of fluoxetine, without the possibility of increasing the dose. The authors cited some dated references to support this, such as Altamura et al (1988) and Beasley (1990):
http://www.ncbi.nlm.nih.gov/pubmed/2196623
A better, more recent article reviewing antidepressant dose vs effectiveness is by Berney (2006):
http://www.ncbi.nlm.nih.gov/pubmed/16156383
In many studies, higher doses may appear to work better, mainly because the dose was increased before the lower dose had a chance to work fully. The lower dose may well have worked just as well as the higher dose. Controlled studies comparing different doses do not support the belief that higher doses work better.
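A toy simulation can illustrate how this confound arises. The numbers are entirely hypothetical: the sketch assumes response onset is spread evenly over 1-8 weeks, is completely independent of dose, and that anyone not yet responding has their dose doubled at week 2:

```python
import random

random.seed(0)

# Toy model: each patient's true time-to-response is 1-8 weeks, completely
# independent of dose. Anyone not yet responding at the week-2 visit has
# their dose doubled, as in routine practice.
n = 100_000
credited_to_high_dose = 0
for _ in range(n):
    responds_at_week = random.randint(1, 8)  # true onset, dose-independent
    if responds_at_week > 2:                 # still unwell at week 2 ...
        credited_to_high_dose += 1           # ... so responds on the high dose

print(f"{credited_to_high_dose / n:.0%} of responses occur on the higher dose")
# -> about 75%, even though dose had no effect at all in this model
```

In an uncontrolled case series, this pattern would look like evidence that the higher dose "worked," which is exactly why randomized fixed-dose comparisons matter.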
So it should not be routine practice to increase antidepressant doses beyond a standard "full dose," which is usually one tablet or capsule daily. In many cases, the different dosage regimens are likely to be equivalent. It is relevant to consider that higher doses mainly benefit the pharmaceutical companies, since they sell more product for the same effectiveness. Therefore, presentations of research data about antidepressant effectiveness may be biased in favour of higher doses. An extremely common design in antidepressant studies is "flexible dosing," which usually leads to the antidepressant group averaging about twice the standard dose by the end. Even when treatment effects are shown, this design invites the specious conclusion that higher doses are better.
However, there are certainly many individual case reports of higher doses being more useful. So dose increases may have a role in some cases.
The key point is to question dose increases as a reflexive, routine management strategy for inadequate antidepressant effects. Alternative strategies include giving the lower dose a longer try, switching to something else, or using some form of augmentation.
Addendum:
Just days after posting this, I see there is a new meta-analysis by Jakubovski et al. in The American Journal of Psychiatry (173:2, pp. 174-183) which suggests that SSRI antidepressants do actually work slightly better at higher doses, with effectiveness peaking at about 2.5 times the standard dose (e.g. 50 mg of fluoxetine). The authors acknowledge that the data show a trade-off: slightly improved effectiveness at higher doses, accompanied by worsened tolerability.
Yet, it is important to consider that higher doses could reflect a greater placebo effect; some of the research about active placebos shows that agents which cause more side effects are likely to have a larger impact on symptoms than inert placebos. Because antidepressants at higher doses have more side effects, there would be more of this "active placebo" effect. See my previous post on this subject: http://garthkroeker.blogspot.ca/2009/03/active-placebos.html
It's hard to know what to make of this, other than to remain open-minded about the issue. I think a better design for this type of question is to compare doses within individual clinical trials, rather than to amass data meta-analytically. Active placebo comparison groups would also be useful: agents causing very mild side effects could be used instead of a totally inert placebo, so as to improve the blinding of the studies. In many individual clinical trials of antidepressants (both new and old) which compare doses or dose ranges within the study itself, there are no significant differences in effectiveness.
Another issue, which the authors point out, is that most antidepressant studies have strict inclusion criteria which usually do not match the type of cases one would tend to see clinically most often. Many studies require a major depressive disorder diagnosis, with limited comorbidities allowed, and with limited past treatment trials, etc.
Meanwhile, it remains reasonable to give a baseline dose of antidepressants an adequate length of time to work, without reflexively increasing the dose on a routine basis. Dose increases remain an option, with some evidence-based support, but switching or augmentation could often be preferred, depending on patient preference and side-effects.
Wednesday, December 9, 2015
Cochrane Review: ADHD medications have lower-quality evidence than most people believe
A new Cochrane review, published on November 25, 2015 by Storebø, Zwi, and colleagues, looked at the use of stimulant medication (specifically methylphenidate) to treat childhood ADHD. Their conclusions included:
1. "the low quality of the underpinning evidence means that we cannot be certain of the magnitude of the effects."
2. "the general perception of methylphenidate as an effective drug for all children with ADHD seems out of step with the new evidence."
The authors found a great deal of industry sponsorship in existing studies, and found that "all 185 trials" had a high risk of bias. I would add that more recent ADHD studies, involving newer, more expensive medications, are most likely at an even higher risk of similar biases.
In general, the weaknesses of existing data are similar to the weaknesses in much other psychiatric research: studies are usually brief, rather than long-term, despite treatments often being given for many years or permanently.
Other authors, such as Hinshaw (2015) [http://www.ncbi.nlm.nih.gov/pubmed/26262927], have reached similar conclusions, including:
"the diminution of medication's initial superiority [was apparent] once the randomly assigned treatment phase turned into naturalistic follow-up. The key paradox is that while ADHD clearly responds to medication and behavioral treatment in the short term, evidence for long-term effectiveness remains elusive"
How should this information inform our understanding or management of ADHD?
First, I do not think it is necessary to stop using stimulants as a treatment. However, I do think it is necessary to step away from the assumption that long-term stimulant use is appropriate for every person with ADHD symptoms. Other ways of using stimulant medication could be more appropriate for many people, such as using stimulants sporadically, to manage attentional symptoms for brief periods of time.
The evidence also does not strongly support the long-term effectiveness of behavioural therapies. This, too, is not really surprising to me.
I think that the answer lies in moving away from a highly medicalized, reductionistic approach entirely. Phenomena such as ADHD have broad biopsychosocial underpinnings: some factors exist within the individual, while many others exist in family, social, and educational structures. In some ways this is similar to other public health issues, such as obesity or addictions: a single medication or behavioural treatment is very unlikely to be a remarkably effective strategy to help with these problems. Yet, each of these strategies has a role, provided that the role is not overvalued by those offering it. Other larger social factors are extremely important as well, including factors relating to poverty, economic equality, community supports, provision of justice & public safety, etc.
So in conclusion, I see -- not surprisingly -- that we must not have exaggerated expectations of medication for treating ADHD or any other psychiatric phenomena. I do think stimulants have an important role, however, for many people, provided that the expectations are modest, and provided that side effect risks are not discounted by an over-enthusiastic prescriber with biased beliefs about long-term effectiveness vs. risk.
It is also important not to be biased against any particular treatment. In some cases, for example, balanced medication treatment of ADHD could reduce various types of risk, including substance use problems and traffic accidents. It is just that the magnitude of such protective effects is likely to be exaggerated in most practitioners' minds, due to the biases described above.
As with other life issues, I believe it is necessary to have a very broad view about helping strategies, which includes other types of therapeutic support if desired, as well as attention given to community, educational, cultural, and family resources--not in isolation, but in a comprehensive and holistic way.
1. "the low quality of the underpinning evidence means that we cannot be certain of the magnitude of the effects."
2. "the general perception of methylphenidate as an effective drug for all children with ADHD seems out of step with the new evidence."
The authors found a great deal of industry sponsorship in existing studies, and found that "all 185 trials" had a high risk of bias. I would add that more recent ADHD studies, involving newer, more expensive medications, are most likely at an even higher risk of similar biases.
In general, the weaknesses of existing data are similar to the weaknesses in much other psychiatric research: studies are usually brief, rather than long-term, despite treatments often being given for many years or permanently.
Other authors, such as Hinshaw (2015)[http://www.ncbi.nlm.nih.gov/pubmed/26262927] have reached similar conclusions, including
"the diminution of medication's initial superiority [was apparent] once the randomly assigned treatment phase turned into naturalistic follow-up. The key paradox is that while ADHD clearly responds to medication and behavioral treatment in the short term, evidence for long-term effectiveness remains elusive"
How should this information inform our understanding or management of ADHD?
First, I do not think it is necessary to stop using stimulants as a treatment. However, I do think it is necessary to step away from the assumption that long-term stimulant use is appropriate for every person with ADHD symptoms. Other ways of using stimulant medication could often be more appropriate for many, such as using stimulants sporadically, to manage attentional symptoms for brief periods of time.
The evidence also does not strongly support the long-term effectiveness of behavioural therapies. This, too, is not really surprising to me.
I think that the answer lies in moving away from a highly medicalized, reductionistic approach entirely. Phenomena such as ADHD have broad biopsychosocial underpinnings: some factors exist within the individual, while many others exist in family, social, and educational structures. In some ways this is similar to other public health issues, such as obesity or addictions: a single medication or behavioural treatment is very unlikely to be a remarkably effective strategy to help with these problems. Yet, each of these strategies has a role, provided that the role is not overvalued by those offering it. Other larger social factors are extremely important as well, including factors relating to poverty, economic equality, community supports, provision of justice & public safety, etc.
So in conclusion, I see -- not surprisingly -- that we must not have exaggerated expectations of medication for treating ADHD or any other psychiatric phenomena. I do think stimulants have an important role, however, for many people, provided that the expectations are modest, and provided that side effect risks are not discounted by an over-enthusiastic prescriber with biased beliefs about long-term effectiveness vs. risk.
It is also important not to be biased against any particular treatment. In some cases, for example, balanced medication treatment of ADHD could reduce various types of risks, including substance use problems and traffic accidents, etc. It is just that the magnitude of such protective effects are likely to be exaggerated in most practioners' minds, due to the biases described above.
As with other life issues, I believe it is necessary to have a very broad view about helping strategies, which includes other types of therapeutic support if desired, as well as attention given to community, educational, cultural, and family resources--not in isolation, but in a comprehensive and holistic way.
Monday, November 9, 2015
Duloxetine (Cymbalta)
Duloxetine (Cymbalta) is another newer antidepressant, approved in the US in 2004, and in Canada in 2007. It is a reuptake inhibitor of both serotonin and norepinephrine, and is most similar in this regard to venlafaxine (Effexor).
In a study of medication treatment options for severe depression, a switch to duloxetine was compared with a dose increase of escitalopram. The escitalopram group had better results, including a remission rate of 54% for escitalopram vs. 42% for duloxetine. ( http://www.ncbi.nlm.nih.gov/pubmed/22559255 )
Another similar comparative study also favoured escitalopram 10-20 mg daily over duloxetine 60 mg, both in terms of effectiveness and side effect profile. In this study 2% of the escitalopram group dropped out due to side effects, compared to 13% of the duloxetine group.
( http://www.ncbi.nlm.nih.gov/pubmed/17563128 )
A well-done 2011 review by Schueler et al. compared venlafaxine and duloxetine with SSRIs. They concluded the following:
1) Venlafaxine had superior efficacy in response rates but inferior tolerability to SSRIs
2) Duloxetine did not show any advantages over other antidepressants and was less well tolerated than SSRIs and venlafaxine.
( http://www.ncbi.nlm.nih.gov/pubmed/20831742 )
Another study, from 2006, also shows evidence that venlafaxine is superior to duloxetine: http://www.ncbi.nlm.nih.gov/pubmed/16867188
In one of the few well-designed comparative studies of venlafaxine vs. duloxetine, by Perahia et al (2008), the two medications were found to have similar effectiveness, but with a higher dropout rate due to side effects in the duloxetine group (http://www.ncbi.nlm.nih.gov/pubmed/17445831). A look at the graphs of symptom change shows that the two medications appear identically effective, but duloxetine caused more side effects, especially nausea. It is true that discontinuing venlafaxine causes more side effects than discontinuing duloxetine, but this could be framed as a technical matter that just needs to be managed by very slow tapering.
This 2012 study (sponsored by the manufacturer!) by Martinez et al (http://www.ncbi.nlm.nih.gov/pubmed/22027844) compared duloxetine with SSRI treatments for major depression in a 12-week prospective trial. Duloxetine performed well, but on the primary outcome measure there was no significant difference in response or remission rates. On secondary measures there appeared to be some advantages for duloxetine, particularly for pain symptoms. But the study was not designed to assess pain syndromes--and SSRIs are known to be ineffective for pain!
Duloxetine is often touted as a good treatment for neuropathic pain. And numerous studies do show that it can help. But how does it actually compare to other options? Specifically, how does it compare with a much cheaper and similar antidepressant, venlafaxine? Rudroju et al (2013) looked at comparative effectiveness of various medications for treating neuropathic pain. Many medications helped, including duloxetine. But in this study, gabapentin and venlafaxine had the best odds ratio of helping, followed by pregabalin. Duloxetine was farther down the list. With a benefit-risk analysis, which takes into account side effects and tolerability, gabapentin, pregabalin, and venlafaxine were once again at the top of the list of best agents, with duloxetine farther down.
( http://www.ncbi.nlm.nih.gov/pubmed/24284851)
Duloxetine (Cymbalta) costs about $4.23 for a 60 mg dose, compared to $0.38 for an equivalent 150 mg dose of Effexor XR. So it is more than 10 times as expensive as an alternative shown to work as well, if not better.
In conclusion, Cymbalta is yet another newer antidepressant which is not necessarily better than alternatives; in fact the alternatives such as Effexor XR are probably equally effective or more effective. It is marketed intensely as a treatment for neuropathic and other pain syndromes, but alternatives such as Effexor XR work better, with fewer side effects, at a lower cost. Therefore, just as with the other antidepressants mentioned in the previous posts, Cymbalta could be considered a third-line option, which might suit some people well if they have tried other things unsuccessfully.
Vortioxetine
Vortioxetine is one of the newest antidepressants on the market, released in the U.S. in 2013. It inhibits serotonin reuptake and also has a variety of direct effects on serotonin receptors.
This is a negative study of vortioxetine, showing that it did not lead to any difference in rating scores compared to placebo, when used at doses of 10 mg or 15 mg daily, to treat depression for 8 weeks:
http://www.ncbi.nlm.nih.gov/pubmed/26035186
Another study, by Jacobson et al (2015), looked at doses of 10 mg or 20 mg daily and found slight improvements in the vortioxetine groups compared to placebo, with "significant" differences in the MADRS score only at the 20 mg dose (http://www.ncbi.nlm.nih.gov/pubmed/26035185). If you look at the symptom changes vs. placebo on a graph, the clinical relevance of the vortioxetine effect appears questionable. Yet, typically with papers of this type, despite the results being very unimpressive, the authors frame them very positively, as though they had discovered a fantastically effective new treatment. Vortioxetine is also supposed to be helpful for limiting sexual side effects, but the measures of this in the study once again show no spectacular benefit: among those who did not have sexual side effects previously, about half of the vortioxetine group developed them, at a rate 10-20% greater than placebo. Here are the authors' final assertions at the end of their paper: "In conclusion, vortioxetine 20 mg significantly reduced MADRS total score at 8 weeks in adults with MDD. Overall, vortioxetine was well tolerated in this study." Perhaps a fairer conclusion would be: "vortioxetine produced small differences compared to placebo in the MADRS score, but only at a dose of 20 mg daily. The degree of improvement does not compare favourably with similar studies using other antidepressants. Rates of side effects, including sexual side effects, were higher in the vortioxetine groups than in the placebo groups."
A 2015 meta-analytic review paper by Rosenblat et al (http://www.ncbi.nlm.nih.gov/pubmed/26209859 ) showed in general that antidepressants appear to help with cognitive function when used to treat depression. But they conclude that "no statistically significant difference in cognitive effects was found when pooling results from head-to-head trials of SSRIs, SNRIs, TCAs, and NDRIs."
An article by Llorca et al (2014), a "meta-regression analysis," appears to favour vortioxetine as better than other antidepressants (https://www.ncbi.nlm.nih.gov/pubmed/25249164). This article is then quoted elsewhere, such as on Wikipedia, as supporting the claim that vortioxetine is a superior antidepressant. But the article presents indirect information only; no actual head-to-head comparative study is referred to at all. And the findings, even from this study, really only show that vortioxetine is in the "same ballpark" as other agents in terms of effects--they certainly do not show superiority.
It was hoped that vortioxetine might help with generalized anxiety, but after several negative studies (https://www.ncbi.nlm.nih.gov/pubmed/24424707, https://www.ncbi.nlm.nih.gov/pubmed/24341301), the latter of which showed it to be significantly inferior to another antidepressant (duloxetine), it is no longer seriously claimed that it is an appropriate treatment for GAD.
Vortioxetine costs about $3.25 for a 20 mg dose. This is about 10 times more than a 20 mg dose of citalopram.
In conclusion, vortioxetine is another new option for treating depression. It could be something to think about for treating anxious depression. But there is no evidence that it is superior to other options, and is probably inferior in many cases. There is no evidence of any specific benefit for treating anxiety disorders such as GAD. I would consider it to be a third-line alternative at this point.
Sunday, November 8, 2015
Desvenlafaxine (Pristiq)
Desvenlafaxine (Pristiq) is an antidepressant that has been available since 2008-2009. It is another example, similar to escitalopram, of a new drug being marketed which is simply a chemical "tweak" of another very similar drug. Pristiq is an active metabolite of another common antidepressant, venlafaxine (Effexor). Effexor had been on the market since 1993.
Because it was new, many studies were done, usually comparing it with placebo, showing that it works. Yet very few studies compared it with other antidepressants.
Laoutidis and Kioulos (2015) have recently published a review and meta-analysis of desvenlafaxine. http://www.ncbi.nlm.nih.gov/pubmed/26205685
They found that while it clearly works better than placebo in short-term trials, it is significantly inferior to other agents in comparative studies (i.e. those studies in which desvenlafaxine is compared with a different antidepressant prospectively).
In a 2014 study by Maity et al., desvenlafaxine and escitalopram were found to be equally effective (actually with a non-statistically-significant edge favouring escitalopram) for anxious depression. But in this study, desvenlafaxine caused more side effects than escitalopram. http://www.ncbi.nlm.nih.gov/pubmed/25097285
Soares et al (2010) similarly showed no advantage to using Pristiq instead of Cipralex for treating depression in post-menopausal women. Once again, Cipralex had a non-statistically-significant advantage in effectiveness over Pristiq. http://www.ncbi.nlm.nih.gov/pubmed/20539246
(To be clear about what I mean by "non-significant," it is important to know that all statistical findings are probability statements. "Significant" usually refers to a finding which has a less than 5% chance of being due to random variation alone. For many findings, one measure might exceed another, but with a higher than 5% likelihood of the difference being due to chance. It should be considered, though, that from a Bayesian point of view, results which differ, even at a so-called "non-significant" level of confidence, still increase somewhat the likelihood that there is a real difference. For example, if we toss a coin 10 times and get 7 heads (instead of the expected 5), there is about a 17% chance of getting 7 or more heads out of 10 from a fairly balanced coin. This would be "non-significant" with respect to showing that the coin was not fairly balanced. But even so, actually seeing 7 heads in 10 tosses should increase one's suspicion, in a quantifiable way, that the coin is imbalanced. Thus, one should not entirely dismiss "non-significant" results; they should optimally be considered within a large fund of data about an issue, each part of which should reasonably sway our judgment slightly.)
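To make the coin example concrete, here is a minimal sketch in Python. The binomial tail probability reproduces the 17% figure above; the Bayesian update rests on an illustrative assumption of my own, namely that the alternative hypothesis is a coin biased to 70% heads, with a 50/50 prior.

```python
from math import comb

# One-sided p-value: chance of 7 or more heads in 10 tosses of a fair coin
p_value = sum(comb(10, k) for k in range(7, 11)) / 2**10
print(f"P(>=7 heads | fair coin) = {p_value:.3f}")  # ~0.172: "non-significant"

# Bayesian update: 50/50 prior between a fair coin and a hypothetical
# biased coin (70% heads); we then observe exactly 7 heads in 10 tosses.
def binom_likelihood(p, heads=7, n=10):
    return comb(n, heads) * p**heads * (1 - p)**(n - heads)

prior_biased = 0.5
l_biased = binom_likelihood(0.7)
l_fair = binom_likelihood(0.5)
posterior_biased = (l_biased * prior_biased) / (
    l_biased * prior_biased + l_fair * (1 - prior_biased))
print(f"P(biased coin | data) = {posterior_biased:.2f}")  # ~0.69, up from 0.50
```

The "non-significant" observation still shifts the posterior from 0.50 to about 0.69, which is exactly the sense in which such results should sway our judgment somewhat without being dismissed or over-weighted.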
In this interesting study by Liebowitz et al (2013) ( http://www.ncbi.nlm.nih.gov/pubmed/23517291 ), Pristiq was offered at two different doses (10 mg and 50 mg), compared with placebo, for treating depression. Both doses were superior to placebo, but no different from each other! Yet the "recommended dose" is 50 mg, and Pristiq is only available in 50 mg and 100 mg tablets!
Pristiq costs about $3.00 per 50 mg pill. A similar drug, Effexor XR, costs $0.75 for a similar dose. Celexa at an equivalent dose costs $0.27, according to Pharmacy Compass (http://www.pharmacycompass.ca/).
So I do not see any reason to recommend Pristiq, except as one of a list of alternatives once other options have been tried. There is no reason to expect that it would work better than any other antidepressant, unless a particular person just happens to prefer it (as is sometimes the case). There is evidence to suggest that it has more side effects than alternatives. I do not necessarily think it is a bad drug, though: I'm sure there are some who might try it after exploring other options, and find it very helpful. But based on current evidence it should not be included as a first-line agent.
Escitalopram vs. Citalopram (Cipralex or Lexapro vs. Celexa)
It is interesting how professional opinion can be swayed by trends in practice. Escitalopram (Cipralex, or Lexapro) is a newer antidepressant than citalopram (Celexa). Citalopram itself is a mixture of "enantiomers," which are molecules identical to each other except for being geometric mirror-images. In many chemical processes, different enantiomers are formed in fairly equal amounts, as a mixture. But escitalopram, unlike citalopram, consists of just one of these enantiomers rather than a mixture. Citalopram is literally a mixture of escitalopram with an inactive enantiomer. Therefore, when you take citalopram, you are taking escitalopram, plus the inactive enantiomer of escitalopram.
Here we have it again, that escitalopram has more recently been on patent, while citalopram has been available in a generic form for a longer time. Of course, there would be many more industry-sponsored research studies done recently on escitalopram.
There is no doubt that escitalopram can be a good antidepressant. But many professionals (including the authors of one formal instructive report I recently read) assert that escitalopram is clearly "better" than citalopram.
I think this belief is mainly due to cognitive biases. There has been much more marketing favouring escitalopram in the past decade. The trends in practice among psychiatrists tend to favour the personal belief that "escitalopram is better." Because it is used more often these days than citalopram, any positive report about escitalopram is likely to be more salient. Also, with recurrent trials of antidepressants, any switch to almost any new agent has a reasonable probability of leading to some improvement, irrespective of the properties of the new agent. For many people, a given antidepressant does not work well enough. In this cohort, it is much more likely that a given person would have tried citalopram at some point in the past, and would now be looking at trying escitalopram. There might be about a 30% chance of the escitalopram helping in this scenario. For the thousands of people in this group, there would then be hundreds who would have the experience of escitalopram appearing to work better than citalopram. This feeds the notion that escitalopram is in fact a better antidepressant.
The bias here is that very few people in this cohort would have tried escitalopram first, then tried citalopram later on. This is because escitalopram is newer, more highly marketed, and is more likely to be used when other antidepressants have not worked. But the prevailing evidence is that most any new antidepressant (or other therapy) trial has a similar chance of helping, when a previous trial has not helped. Therefore, I predict that there would be an equal likelihood of citalopram working when escitalopram failed, compared to escitalopram working when citalopram failed. It is possible that the only reason escitalopram appears to work more commonly is that it is simply used more often!
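This ordering effect is easy to demonstrate with a toy simulation. In the sketch below, the 30% response rate to any switch and the assumption that 90% of switchers tried citalopram first are illustrative numbers of my own, not figures from any of the studies discussed here.

```python
import random

random.seed(0)

N = 10_000                 # patients whose first antidepressant did not help
P_RESPONSE = 0.30          # assumed chance that ANY switch helps, drug-agnostic
P_CITALOPRAM_FIRST = 0.90  # assumption: the older drug is usually tried first

escit_rescues = cital_rescues = 0
for _ in range(N):
    if random.random() < P_CITALOPRAM_FIRST:
        # citalopram "failed"; the patient switches to escitalopram
        escit_rescues += random.random() < P_RESPONSE
    else:
        # escitalopram "failed"; the patient switches to citalopram
        cital_rescues += random.random() < P_RESPONSE

print(f"escitalopram worked after citalopram failed: {escit_rescues}")
print(f"citalopram worked after escitalopram failed: {cital_rescues}")
# Roughly 2700 vs 300: an apparent "superiority" of escitalopram produced
# entirely by the order in which the two drugs tend to be tried.
```

The two drugs are identical by construction in this model; the lopsided counts come only from the ordering of the trials.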
Some of my patients, over the years, have tried both of these medications. Some have ended up preferring escitalopram. Others have ended up preferring citalopram. For most, there has been no difference, either in side effects or effectiveness.
Are there any recent research studies which compare the two? One recent study, by Li et al (2014), reviews and pools results from 3 previous clinical studies. They conclude that there is no difference in response or remission rates between escitalopram and citalopram:
http://www.ncbi.nlm.nih.gov/pubmed/25401715
It is interesting to look at the data from previous studies, including a Cochrane review done in 2012, which concluded that escitalopram is better than citalopram: http://www.ncbi.nlm.nih.gov/pubmed/22786497 The authors slip in the caution that "As with most systematic reviews in psychopharmacology, the potential for overestimation of treatment effect due to sponsorship bias and publication bias should be borne in mind when interpreting review findings." Yet the reader of this article is left with the impression that escitalopram is much better than citalopram.
I note that escitalopram is about 30% more expensive than an equivalent dose of citalopram, according to PharmacyCompass, a Canadian service which helps people find the best local prices for medications at local pharmacies.
In conclusion, I think that with respect to antidepressant choice, there is no doubt that escitalopram is appropriate and works at least as well as other available medications. But it is not necessarily true that escitalopram is "better." The problem with this biased view of "betterness" is that it could cause a person (a psychiatrist or patient) to overlook other options, and favour escitalopram as a first choice automatically, and unnecessarily. It could also cause many to overlook citalopram as a possibility for someone who has unsuccessfully tried escitalopram in the past.
Thursday, October 8, 2009
Is Seroquel XR better than generic quetiapine?
A supplement written by Christoph Correll for The Canadian Journal of Diagnosis (September 2009) was delivered--free--into my office mailbox the other day.
It starts off describing the receptor-binding profiles of different atypical antipsychotic drugs. A table is presented early on.
First of all, the table as presented is almost meaningless: it merely shows the concentrations of the different drugs required to block 50% of the given receptors. These so-called "Ki" concentrations have little meaning, particularly for comparing between one drug and another, UNLESS one has a clear idea of what concentrations the given drugs actually reach when administered at typical doses.
So, of course, quetiapine has much higher Ki concentrations for most receptors, compared to risperidone -- this is related to the fact that quetiapine doses are in the hundreds of milligrams, whereas risperidone doses are less than ten milligrams (these dose differences are not reflective of anything clinically relevant, and only pertain to the size of the tablet needed).
A much more meaningful chart would show one of the following:
1) the receptor blockades for each drug when the drug is administered at typical doses
2) the relative receptor blockade compared to a common receptor (so, for example, the ratio between receptor blockades of H1 or M1 or 5-HT2 compared to D2, for each drug).
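To illustrate why a Ki value is uninterpretable without the achieved concentration, here is a minimal sketch using a simple one-site binding model, occupancy = C / (C + Ki). The concentrations and Ki values are invented placeholders, not figures from Correll's table.

```python
# Simple receptor-occupancy model: fraction of receptors bound = C / (C + Ki),
# where C is the free drug concentration and Ki the binding constant.
def occupancy(conc_nM: float, ki_nM: float) -> float:
    """Fractional receptor occupancy under a one-site binding model."""
    return conc_nM / (conc_nM + ki_nM)

# Hypothetical example: a drug with a "weak" (high) Ki, dosed in the hundreds
# of milligrams, can still occupy more receptor than a "potent" (low-Ki) drug
# given at a tiny dose, because what matters is the ratio C/Ki.
drug_A = {"name": "high-dose drug", "conc_nM": 500.0, "ki_nM": 100.0}
drug_B = {"name": "low-dose drug",  "conc_nM": 5.0,   "ki_nM": 2.0}

for d in (drug_A, drug_B):
    print(f"{d['name']}: occupancy = {occupancy(d['conc_nM'], d['ki_nM']):.0%}")
# high-dose drug: 83%; low-dose drug: 71% -- a table of Ki values alone
# would have ranked them the other way around.
```

The point is that only the ratio of achieved concentration to Ki determines receptor blockade, so Ki columns by themselves cannot rank the drugs.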
The article goes on to explore a variety of other interesting differences between antipsychotics. Many of the statements made were theoretical propositions, not necessarily well-proven empirically. But in general I found this discussion valuable.
Despite the author's apparent efforts to be fair and balanced regarding the different antipsychotics, I note a few things:
1) there are two charts in this article showing symptom improvements in bipolar disorder among patients taking quetiapine extended-release (Seroquel XR).
2) one large figure appears to show that quetiapine has superior efficacy in treating schizophrenia, compared to olanzapine and risperidone (the only "p<.05 asterisk" was for quetiapine!) -- this figure was based on a single 2005 meta-analysis, published in a minor journal, before the CATIE results were published. No other figures were shown based on more recent results, nor was clozapine included in any figure.
I think quetiapine is a good drug. BUT -- I don't see any evidence that quetiapine extended release is actually any better, in any regard, than regular quetiapine. In fact, I have seen several patients for whom regular quetiapine suited them better than extended-release, and for whom a smaller total daily dose was needed.
Here is a reference to one study, done by AstraZeneca, comparing Seroquel with Seroquel XR in healthy subjects: http://www.ncbi.nlm.nih.gov/pubmed/19393840 It shows that subjects given regular quetiapine were much more sedated 1 hour after dosing, compared to those given the same dose of Seroquel XR, implying that the extended-release formulation was superior in terms of side effects. Here is my critique of this study:
1) Sedation is often a goal in giving quetiapine, particularly in the treatment of psychosis or mania.
2) Problematic sedation is usually the type that persists 12 hours or more after the dose, not one hour after the dose. In this study, the two formulations did not differ in a statistically significant way with respect to sedation 7, 8, or 14 hours after dosing. In fact, if you look closely at the tables presented within the article, the Seroquel XR group actually had slightly higher sedation scores 14 hours after dosing.
3) Dosing of any drug can be titrated to optimal effect. Regular quetiapine need not be given at exactly the same dose as quetiapine XR; giving both drugs at the same dose, rather than at the optimally effective dose for each, is likely to bias the results greatly.
4) The study lasted only 5 days for each drug! To meaningfully compare the effectiveness or side effects of two drugs, one must look at differences after a month, or a year, of continuous treatment. For most sedating drugs, problematic sedation diminishes over weeks or months. Once again, if immediate sedation is the measure of side-effect adversity, then this study is biased in favour of Seroquel XR.
5) The study was done in healthy subjects who had no active symptoms to treat. This reminds me of giving insulin to non-diabetic subjects to compare the side effects of different insulin preparations: the choice of population is an obvious, strong bias!
Regular quetiapine has gone generic.
Quetiapine extended-release (Seroquel XR) has not.
I am bothered by the possibility of bias in Correll's article.
It is noted, in small print at the very end of this article, that Dr. Correll is "an advisor or consultant to AstraZeneca, Bristol-Myers Squibb, Cephalon, Eli Lilly, Organon, Ortho McNeill-Janssen, Otsuka, Pfizer, Solvay, Supernus, and Vanda." AstraZeneca is the company which manufactures Seroquel XR.
In conclusion, I agree that there are obviously differences in receptor binding profiles between these different drugs. There are some side-effect differences.
Differences in actual effectiveness, as shown in comparative studies, are minimal. But probably olanzapine, and especially clozapine, are slightly better than the others, in terms of symptom control.
Quetiapine can be an excellent drug. Seroquel XR can be an excellent formulation of quetiapine, and might suit some people better.
BUT -- there is no evidence that brand-name Seroquel XR is superior to generic regular quetiapine.
One individual might respond better to one drug, compared to another.
The author, despite including 40 references, seems to have left out many important research studies on differences between antipsychotics, such as from CATIE and SOHO.
(see my previous post on antipsychotics: http://garthkroeker.blogspot.com/2008/12/antipsychotic-medications.html )
Monday, October 5, 2009
The need for CME
Here's another article from "the last psychiatrist" on CME:
http://thelastpsychiatrist.com/2009/07/who_should_pay_for_continuing.html#more
Another insightful article, but pretty cynical!
But here are some of my opinions on this one:
1) I think that, without formalized CME documentation requirements, there would be some doctors who would fall farther and farther behind in understanding current trends of practice, current research evidence, etc.
2) In the education of intelligent individuals, I have long felt that process is much more important than content. A particular article with an accompanying quiz is bound to convey a certain biased perspective. It is my hope that most professionals are capable of recognizing and resisting such biases; in this modern age, I do think that most of us have a greater understanding of bias, of being "sold" something. In any case, the process of working through such an article provides a structure for contemplating a particular subject, and perhaps for raising questions or an inner debate to reflect upon, or research further, later on. Yet I agree that many psychiatrists might be swayed non-critically by a biased presentation of information. The subsequent quiz, and the individual's high marks on it, then become reinforcers for learning biased information.
3) After accurately critiquing a problem, we should then move on and try to work together to make more imaginative, creative educational programs which are stimulating, enjoyable, fair, and as free of bias as possible.
I think this concludes my little journey through this other blog. While interesting, I find it excessively cynical. It reminds me of someone in the back seat of my car continuously telling me--accurately, and perhaps even with some insightful humour--all the things I'm doing wrong. Maybe I need to hear this kind of feedback periodically--but small doses are preferable! Actually, I find my own writing at this moment becoming more cynical than I want it to be.
Friday, May 1, 2009
My Experiences with Industry Sponsorship
Around 2001, when I was a mood disorders fellow, I was asked to do an educational lecture by Organon, the manufacturer of the antidepressant mirtazapine. The company clearly wanted one of the more prominent mood disorders research psychiatrists to do the lecture--but since no one else was available, they settled for me. It was common practice for research psychiatrists or other perceived "leaders in the field" to be paid by drug companies for "educational lectures" attended by family physicians or other psychiatrists, usually at expensive restaurants or lavishly-catered hotel conference rooms (the drug company footing the bill, of course); I think this common practice remains. To be fair, I think everyone assumed that this was all fine, even a useful educational service. Probably many of those involved in this practice still believe that. And perhaps many of these lectures are useful educational services to some degree, it's just that both the lecturers and the recipients may be unaware of the biases involved. Anyway, my lecture was supposed to be about treating resistant depression. I was provided by the company rep with numerous powerpoint slides about mirtazapine to include in my lecture. I did the lecture, and was paid generously for it. I included a few of the slides about mirtazapine, but I truly tried to give a lecture broadly about treating resistant depression, and discussed mirtazapine for only about 20% of the talk. Clearly the company rep was not impressed with my performance, and I was never again asked to do a lecture for them. I'm glad of that, since the more one does these things, the more one can be convinced that it is professionally appropriate, despite the obvious biases involved.
Around 2000-2001 I was involved in a clinical study of a new drug. The drug company sponsored the study, flew everyone business-class to Monaco (on the French Riviera), and put us up in a lavish 5-star hotel, to attend an introductory meeting regarding the study. Such meetings, in my opinion, are utterly needless expenses. Introductions and instructions about a study can be done without transcontinental travel. Training for rating scales, etc., could be done in some other simple, standardized way, without any need for travel. I did enjoy the trip, and I wouldn't doubt that it contributed to my having a more favourable view of that company's products in the following years.
Also around 2000-2001 I was involved in another clinical study. The drug company, also sponsoring the study, flew everyone business-class to Miami, Florida, and put us up in a famous 5-star hotel. By this time I was starting to have more questions about the neutrality of the research, under these circumstances. Something that struck me during that trip was my observations of the company reps meticulously preparing their video presentation for us -- they were preparing a show; it was basically a slick info-mercial, sound-effects and all. I was also struck by the fact that no one around me seemed to notice this or have a critical view of it. I felt like, on the one hand, we were being treated like royalty, but on the other hand we were simply being bought. I realize that it is good for companies to make participation in research projects attractive to everyone involved. It can be frustrating work to recruit patients for clinical studies, and many psychiatrists would rather not take time away from other aspects of work to participate in research. Research is important, and maybe travel & adventure could be fair aspects to enjoying the life of a researcher. BUT -- the travel is really not necessary at all. It is an extravagance. Information and training about a research protocol can happen locally. Other communication can happen over the phone, over the net, or over a video link. The other expensive extravagances just reduce the neutrality of the study, and also bias all participants (many of whom are "leaders in the field" who often influence other practitioners) to have and convey a more favourable view of the company's products, irrespective of the results of the particular study.
I think it would be interesting to have disclosures in research papers not only about the authors' affiliations with, or income received from, the drug companies, but also about the travel expenses paid by the companies for meetings pertaining to the study in question.
A more mundane aspect of industry sponsorship, during my residency between 1995-2000, was the weekly phenomenon of the "drug lunch." Basically, during almost every group meeting or rounds, food would be provided by a drug rep--usually quite a tasty lunch.
A continuing aspect of industry sponsorship is the distribution of free samples. At times I find this quite useful, to help someone get started on something right away, without the time or expense of a pharmacy visit. At other times, people have not been able to afford medication (the most common psychiatric medications are available for free in BC, through a government plan, but many more exotic medications are not covered by this plan): in some cases, the drug companies have provided a free "compassionate release" supply of medication for extended periods of time. Yet, I recognize that these phenomena lead to bias. The presence of a particular sample can influence the choice of which particular medication to recommend, particularly when the different choices are all similarly effective.
I realize this post may come off sounding like some kind of anti-corporate rant. I don't want to slam corporations too much though -- thanks to large companies, we have many more treatments which can profoundly improve quality of life, and which can save many lives. Profit-oriented motivations can drive productivity, competition, and better research. It's just that we can't be swept into the current of advertising and other biased persuasive tactics which companies use to sell more of their products. We can sympathize with the reality that companies behave this way, but as health care professionals, or as individuals contemplating whether or not to take a particular medication or other treatment, we need information which is clear, unbiased, and as objective as possible.
Sunday, November 9, 2008
Biases associated with Industry-funded research
There is evidence that research studies sponsored by pharmaceutical companies produce biased results. Here is a collection of papers supporting this claim:
http://ajp.psychiatryonline.org/cgi/content/full/162/10/1957
This paper from the American Journal of Psychiatry reports that industry-sponsored studies are 4.9 times more likely to show a benefit for their product.
http://www.ncbi.nlm.nih.gov/pubmed/15588746
In this paper, an association is shown between industry involvement in a study, and the study showing a larger benefit for the industry's product (in this case, with newer antipsychotics).
http://bjp.rcpsych.org/cgi/content/full/191/1/82
In this study, the findings suggest that the direct involvement of a drug company employee in the authorship of a study leads to a higher likelihood of the study reporting a favourable outcome for the drug company product.
http://jama.ama-assn.org/cgi/content/full/290/7/921
This is a very important JAMA article, showing that industry-funded studies are more likely to recommend the experimental treatment (i.e. favouring their product) than non-industry studies, even when the data are the same.
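A note on interpreting a figure like "4.9 times more likely": such results are typically reported as odds ratios. Here is a toy calculation with made-up counts, chosen only so the arithmetic lands near 4.9; these are not data from the papers above.

```python
# Toy odds-ratio calculation with invented counts (not data from the cited
# papers): a 2x2 table of study sponsorship vs. favourable outcome.
favourable   = {"industry": 80, "independent": 45}
unfavourable = {"industry": 20, "independent": 55}

odds_industry = favourable["industry"] / unfavourable["industry"]          # 4.0
odds_indep = favourable["independent"] / unfavourable["independent"]       # ~0.82
odds_ratio = odds_industry / odds_indep

print(f"odds ratio = {odds_ratio:.1f}")
# odds ratio = 4.9: in this toy table, industry-sponsored studies have about
# 4.9 times the odds of reporting a favourable outcome.
```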
I do not publish this post to be "anti-drug company". I think the pharmaceutical industry is wonderful. The wealth of many of these companies may allow them to do very difficult, hi-tech research with the help of some of the world's best scientists. The industry has produced many drugs that have vastly improved people's lives, and that have saved many lives.
Even the profit-driven-ness of companies can be understandable and healthy...it may lead to economic pressure to produce treatments that are actually effective, and that are superior to the products of the competitors.
Sometimes the research trials necessary to show the benefit of newer treatments require such a large scale that they are very expensive...sometimes only a large drug company actually has enough money to sponsor trials of this type.
BUT...the profit-driven orientation of companies may cause them to take short-cuts to maximize profits...
-marketing efforts can distort the facts about effectiveness of a new treatment
-and involvement in comparative trials by an eager, profit-driven industry very likely biases results, and biases the clinical behaviour of doctors
A solution to some of these problems is a requirement for frank transparency about industry involvement whenever research papers are published.
Another solution is to have more government funding for independent, unbiased large-scale clinical trials.
And another solution is for all of us to be better informed about this issue!