This is a brief posting about biostatistics.
In clinical trials, some subjects drop out.
A study is of the highest quality when there are few drop-outs, and when data continue to be collected even on those who have dropped out.
LOCF (last observation carried forward) and MMRM (mixed-effects model for repeated measures) are two different statistical approaches to handling data from studies in which some subjects have dropped out.
The two techniques can produce different numbers, and therefore different conclusions, from the same data set.
The following article illustrates how these techniques can skew the presentation of data, and therefore change our conclusions about an issue, despite nothing "dishonest" taking place:
http://thelastpsychiatrist.com/2009/06/its_not_a_lie_if_its_true.html#more
While I agree with the general point of the above article, I am not convinced by its specific example: having researched the subject myself, I find that LOCF is not necessarily superior to MMRM, even though LOCF has been the most commonly used method for dealing statistically with drop-outs. The following references make the case that MMRM is less biased than LOCF most of the time (though it should be kept in mind that whenever subjects are lost to follow-up, the absence of data on them weakens the study results; this issue deserves close attention when reading a paper):
http://www.stat.tamu.edu/~carroll/talks/locfmmrm_jsm_2004_rjc.pdf
http://www3.interscience.wiley.com/journal/114177424/abstract?CRETRY=1&SRETRY=0
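To make the distinction more concrete, here is a minimal sketch in Python (my own toy example, not taken from the above references) contrasting LOCF imputation with a simplified mixed-model analysis of the same simulated longitudinal data. The simulated dataset, column names, and the random-intercept model are assumptions for illustration only; a full MMRM would typically treat visit as categorical and use an unstructured covariance matrix.

# Toy comparison of LOCF imputation vs. a simplified mixed-model analysis.
# Assumes numpy, pandas, and statsmodels are installed; all data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate 40 subjects x 4 visits; roughly half the subjects drop out after visit 1.
rows = []
for subject in range(40):
    group = "drug" if subject < 20 else "placebo"
    dropped_out = rng.random() < 0.5
    for visit in range(4):
        if dropped_out and visit > 1:
            score = np.nan  # no data collected after drop-out
        else:
            improvement = 2.0 if group == "drug" else 1.0
            score = 20 - improvement * visit + rng.normal(0, 2)
        rows.append({"subject": subject, "group": group, "visit": visit, "score": score})
df = pd.DataFrame(rows)

# LOCF: carry each subject's last observed score forward to the later visits.
locf = df.sort_values(["subject", "visit"]).copy()
locf["score"] = locf.groupby("subject")["score"].ffill()
print("LOCF endpoint means by group:")
print(locf[locf["visit"] == 3].groupby("group")["score"].mean())

# Simplified MMRM-style analysis: a mixed model fit to the observed data only,
# with a random intercept per subject (no imputation of the missing visits).
observed = df.dropna(subset=["score"])
model = smf.mixedlm("score ~ visit * group", observed, groups=observed["subject"])
print(model.fit().summary())

In this sketch, LOCF freezes each drop-out at their last observed score, while the mixed model uses only the data actually collected; with different drop-out patterns in the two arms, the two approaches can yield noticeably different estimates of the treatment effect.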
In conclusion, I can only encourage readers of studies to become better informed about statistics. And if a study could change your treatment of an illness, it is important to read the whole paper in detail, if possible, not just the abstract.
Monday, October 5, 2009
Which is better, a simple drug or a complex drug?
Here is another critique of medication marketing trends in psychiatry:
http://thelastpsychiatrist.com/2009/04/how_dangerous_is_academic_psyc_1.html#more
I agree quite strongly that there has been collusion between:
- psychiatrists who eagerly yearn to meaningfully apply their knowledge of psychopharmacology, pharmacokinetics, neurotransmitter receptor binding profiles, etc. (to justify all those years of study)
- and pharmaceutical company sales reps
I can recall attending many academic rounds presentations in which a new drug was discussed, for example a newly released SSRI. During the talk, there would be boasting about how the new drug had the highest "receptor specificity," or the lowest activity at receptors other than those for serotonin (e.g., those for histamine or acetylcholine).
The facts I was being shown, while enjoying my corporate-sponsored lunch, were true. But they were used as sales tactics, bypassing clear scientific thought. Just because a drug is more "receptor-specific" doesn't mean it works better! Receptor specificity may in some cases relate to a difference in side effects, yet sometimes those very side effects may be related to the efficacy of the drug.
By way of counter-example, I would cite the most effective of all antipsychotic medications, clozapine. This drug has very little "receptor specificity": it interacts with all sorts of different receptors, and it has loads of side effects too. Perhaps this is part of the reason it works so well. Unfortunately, this does not sit well with those of us who yearn to explain psychiatric medication effects using simple flow charts.
Similarly, pharmacokinetic differences between medications are often used as instruments of persuasion--yet these differences are often clinically irrelevant, of unproven clinical relevance, or even clinically inferior. For example, newer SSRI antidepressants have short half-lives, which can be advantageous in some regards; but plain old Prozac (fluoxetine), with its very long half-life, can be an excellent choice, because individuals taking it can safely skip a dose without a big change in serum level and the ensuing side effects.
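As a rough illustration of the half-life point, here is a small back-of-the-envelope sketch in Python; the half-life figures are approximate textbook values, and simple first-order elimination is assumed.

# Rough illustration only: approximate half-lives, simple first-order decay.
# Fraction remaining after a missed dose = 0.5 ** (hours_elapsed / half_life).
half_lives_hours = {
    "fluoxetine (Prozac), half-life ~4 days": 96,
    "paroxetine, half-life ~21 hours": 21,
}

hours_since_missed_dose = 24
for drug, t_half in half_lives_hours.items():
    fraction = 0.5 ** (hours_since_missed_dose / t_half)
    print(f"{drug}: about {fraction:.0%} of the serum level remains after {hours_since_missed_dose} h")

In this toy calculation, roughly 84% of the level remains a day later for the long half-life drug, versus under half for the short half-life drug--which is why a missed dose of fluoxetine tends to be more forgiving.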
I should not be too cynical here -- it is important to know whatever scientific facts can be established. Receptor binding profiles, half-lives, and so on are important, and it can be useful to find medications with fewer side effects because they have fewer extraneous receptor effects. The problem arises when we use facts spuriously, or allow them to persuade us as part of someone's sales tactic.
So, coming back to the question in the title, I would say it is not necessarily relevant whether a drug works in a simple or complex way. It is relevant whether it works empirically, irrespective of the complexity of its pharmacologic effects.
Pregnancy & Depressive Relapse
I was looking at a 2006 article in JAMA about pregnant women taking antidepressants. The women were followed through pregnancy, and rates of depressive relapse were related to changes in antidepressant dose. Here's a link to the abstract:
http://www.ncbi.nlm.nih.gov/pubmed/16449615
The study is too weakly designed to allow strong conclusions. Yet the abstract makes a statement about pregnancy "not being protective" which--while possibly true--is not directly related to the study's findings. This criticism was made astutely by the author of "The Last Psychiatrist" blog:
http://thelastpsychiatrist.com/2006/10/jama_deludes.html
Yet the JAMA study is not uninformative.
And the criticism mentioned above goes a bit too far, in my opinion: the critique itself makes overly strong statements in its own title and abstract.
It appears quite clear that pregnant women with a history of depressive illness who are taking antidepressants, but who decrease or discontinue their medication during pregnancy, have a substantially higher risk of depressive relapse.
Because the study was not randomized, we cannot know for certain that this association is causal, but causation is reasonably suggested. It does not seem likely that such a large effect would be driven by "reverse causation"--that is, by women whose unstable depressive symptoms led them to discontinue their antidepressants. This could happen in some cases, but probably not often. Nor does it seem likely that a woman already taking an antidepressant, who becomes more depressed during the pregnancy, would therefore stop taking her medication; this, too, could happen (I can think of clinical examples), but I don't think it would be common. The most likely explanation is simple: depressive illness that is stabilized during pregnancy is likely to become less stable, and more prone to relapse, if antidepressant medication is discontinued.
The critique also points out that women in the study who increased their doses of medication had higher rates of depressive relapse as well, a fact given little attention in the abstract or conclusion. This finding is not surprising either--what reason would a pregnant woman have to increase the dose of a medication she was already taking, other than an escalation of symptoms? In this case, depressive relapse (which can happen despite medication treatment) is likely the cause of the increased dose; the increased dose is unlikely to have caused the relapse.
Yet, as I said above, the study only allows us to infer these conclusions, as it was not randomized. And I agree that the authors overstate their conclusions in the abstract. In order to more definitively answer these questions, a randomized prospective study would need to be done.
Tuesday, September 29, 2009
Astronomical Photographs
For something completely different--
Have a look at NASA's "astronomy picture of the day" site: http://apod.nasa.gov/apod/
It's interesting, awe-inspiring--and I hope therapeutic--to be reminded of things much larger than ourselves.
Here are some of my favourite pictures from the NASA site:
the sun:
http://antwrp.gsfc.nasa.gov/apod/ap030418.html
http://antwrp.gsfc.nasa.gov/apod/ap021114.html
http://antwrp.gsfc.nasa.gov/apod/ap061204.html
http://antwrp.gsfc.nasa.gov/apod/ap000928.html
http://antwrp.gsfc.nasa.gov/apod/ap080924.html
galaxies:
http://antwrp.gsfc.nasa.gov/apod/ap081012.html
http://antwrp.gsfc.nasa.gov/apod/ap080927.html
http://antwrp.gsfc.nasa.gov/apod/ap050112.html
http://antwrp.gsfc.nasa.gov/apod/ap090701.html
jupiter:
http://antwrp.gsfc.nasa.gov/apod/ap090106.html
N-Acetylcysteine for treatment of compulsive disorders
N-acetylcysteine (NAC) is an antioxidant which modulates the glutamate system in the brain. Glutamate is the most abundant neurotransmitter in the brain, and it generally has strongly excitatory (activating) effects on nerve cells.
A recent study in Archives of General Psychiatry described groups of individuals with compulsive hair-pulling behavior (trichotillomania), randomized to receive either placebo, or N-acetylcysteine 1200 mg/day, then up to 2400 mg/day, over 12 weeks:
http://www.ncbi.nlm.nih.gov/pubmed/19581567
The N-acetylcysteine group had about a 50% reduction in hair-pulling behaviour, with no change in the placebo group. Those taking N-acetylcysteine did not report any side effects; in fact, the only side effects reported were among those in the placebo group.
The same author published a study in 2008 showing a substantial improvement in compulsive gambling behavior in a group given NAC at an average dose of about 1500 mg/d:
http://www.ncbi.nlm.nih.gov/pubmed/17445781
A very preliminary study showed that NAC may have some promise in treating cocaine addiction:
http://www.ncbi.nlm.nih.gov/pubmed/17113207
NAC has shown some promise as an adjunctive treatment for chronic schizophrenia; in this study the dose was 1000 mg twice daily, over 24 weeks. Once again, there were no side-effects. As I look at the body of the paper, I see that there was a definite favorable effect from the NAC compared to placebo, in several domains, but the size of the effect seemed clinically modest:
http://www.ncbi.nlm.nih.gov/pubmed/18436195
So NAC appears to be an appealing therapy for a variety of frequent, and often difficult-to-treat psychiatric symptoms. There do not appear to be side effect problems.
At this point, NAC can be obtained from health food stores in Canada, as a nutritional supplement. It is also on the prescription formulary in an injectable form for treating acetaminophen toxicity.