Here's another interesting link from "The Last Psychiatrist" blog:
http://thelastpsychiatrist.com/2006/11/post_2.html#more
I agree with many of his points.
But here are a few counterpoints, in order:
1. I think some psychiatrists talk too little. There's a difference between nervous or inappropriate chatter that dilutes or interrupts a patient's opportunity to speak, and an engaged dialog focused on the process or content of a problem. There is a trend in psychiatric practice, founded or emphasized by psychoanalysis, that the therapist is to be nearly silent. Sometimes I think these silences are unhelpful, unnecessary, inefficient, even harmful. I can think of patients for whom silence in a social context is extremely uncomfortable, and certainly not an opportunity to learn in therapy. Therapy in some settings can be an exercise in meaningful dialog, active social-skills practice, or simply a chance to converse or laugh spontaneously.
I probably speak too much myself, and need to keep my mouth shut a little more often. I have to keep an eye on this one.
It is probably better for most psychiatrists to err on the side of speaking too little, I would agree: an inappropriately overtalkative therapist is probably worse than an inappropriately undertalkative one. But I think many of us have been taught to be so silent that we cannot be fully present, intuitively, personally, and intellectually, to help someone optimally. In these cases the tradition of therapeutic silence can suppress healthy spontaneity, positivity, and humour in a way that only delays or obstructs a patient's therapy experience.
2. I agree strongly with this one, especially when historical details are rehashed interminably during the first few sessions.
However, I do think a framework for comprehensiveness is important. And sometimes it is valuable, in my opinion, to review the whole history again after seeing a patient for a year, or for many years. There is so much focus on comprehensive history-taking during the first few sessions, or the first hour, that we forget to revisit or deepen this understanding later on, once we know the patient much better. Whole elements of a patient's history can be forgotten because they were talked about only once, during the first session.
There is a professional standard of taking a "comprehensive psychiatric history" in a single interview of no longer than 55 minutes. There may even be a certain bravado among residents, an admiration for the one who can "get the most information" in that single hour. I object to this being a dogmatic standard. A psychiatric history, as a personal story, may take years to understand well, and even then the story is never complete. It can be quite arrogant to assume that a single brief interview (which, if an optimal exchange of "facts" is to take place, can sound like an interrogation) leads to a comprehensive understanding of a patient.
I do believe, though, that certain elements of comprehensiveness should be aimed for, and aimed for early. For example, it is very important to ask about medical ailments, about substance use, and about the various symptoms a person may be too embarrassed to mention unless asked directly. Otherwise an underlying problem could be missed entirely, and the ensuing therapy could be ineffective or even deleterious.
Also, some patients may find it beneficial, even a relief, to go through a very comprehensive historical review in the first few sessions, with the structure of the dialog supplied mainly by the therapist. Others may feel more comfortable, or find it more helpful, to supply the structure of their story themselves. So it may be important not to make strong imperative statements on this question: as with so many other things in psychiatry, a lot depends on the individual situation.
3. I think it's important not to ignore ANY habitual behavior that could be harmful. Yet some times are better than others to address or push for things like smoking or soft-drink cessation: a person with a chronically unstable mood disorder may need improved mood stability (some of which, in the short term at least, may actually come from cigarette smoking) before being able to embark on a quit-smoking plan.
4. Not much to add here.
5. Well, point taken. I've written a post about psychiatry and politics before, and suggested a kind of detached, "monastic" role. But on the other hand, any person or group may have a certain influence. The article essentially suggests that political and social policy are none of psychiatry's business. Maybe not. But the fact is, psychiatry does have some influence to effect social change. And, in my opinion, social and political dynamics are plainly driven by forces similar to those that operate in a single family, or in an individual's mind. So, if there is any wisdom in psychiatry, it could certainly be applicable to the political arena. Unfortunately, the psychiatrists I have seen getting involved in politics or other group dynamics appear just as swept up in dysfunctional conflict as anyone else.
But if there's something psychiatry can do to help with war or world hunger, why not? In some historical situations an unlikely organized group has come to the great aid of a marginalized or persecuted group in need of relief or justice, even though the organized group had no specialized knowledge of the matter it was dealing with.
6. I strongly agree. I prefer to offer therapy to most people I see, and I think most people do not have adequate opportunities to experience therapy. Yet I also observe that many individuals could be treated with a medication prescribed by a GP and simply experience resolution of their symptoms. The subsequent "therapy" is done by the individual in daily life and does not require a "therapist"; the medication itself may no longer be needed after a year or so. Sometimes therapists end up offering something that isn't really needed, or aggrandize the role or importance of "therapy" (we studied all those years to learn to be therapists, after all, so a therapist's view on the matter may be quite biased), when occasionally the best therapy of all is self-provided. Of course, many situations are not nearly so simple, and that is where a therapy experience can be very, very important. I support respecting the patient's individual wishes on this matter, after presenting the benefits and risks of the different options as clearly as possible. Of course, we're all biased in how we understand this benefit/risk profile.
7. Some interesting points here, but subject to debate. Addressing these complex subjects in an imperative manner makes me uncomfortable.
8. Polypharmacy should certainly not be a norm, though the intelligent use of combination therapies, in conjunction with a clear understanding of side-effect risks, can sometimes be helpful. Some of the claims in this section have not actually been studied well: for example, the assertion that it makes no pharmacological sense to combine two different SSRI antidepressants. There is no body of research data proving that such a combination is in fact ineffectual. So before we scoff at the practitioner who prescribes two SSRIs at once, I think we should look at the empirical result: since there are no prospective randomized studies, the best we can do is see whether the individual patient is feeling better or not.
9. I'm not a big fan of "diagnosis," but sometimes, and for some individuals, being able to give a set of problems a name can be part of a very helpful therapy experience. The name, the category, may lead the person to understand more about causes and solutions. Narrative therapy, I think, makes good use of "naming" (a variant of "diagnosing") as a therapeutic construct.
10. There isn't a number 10 here, but the comments at the end of this article were good.
Monday, October 5, 2009
Biased Presentation of Statistical Data: LOCF vs. MMRM
This is a brief post about biostatistics.
In clinical trials, some subjects drop out.
A study's quality is best when there are few drop-outs, and when data continue to be collected even on those who have dropped out.
LOCF (last observation carried forward) and MMRM (mixed model for repeated measures) are two different statistical approaches to handling study populations in which some subjects have dropped out.
The two techniques can generate different numbers to present, and therefore different conclusions, from the same data.
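To make this concrete, here is a minimal simulation sketch in Python, using entirely hypothetical data (not drawn from any study). It contrasts LOCF with a simple completers-only average; it is not a full MMRM, which would require fitting a mixed-effects model, but it shows how the handling of drop-outs alone changes the number a paper could report:

```python
# A toy simulation (hypothetical data, not from any cited study) showing how
# the rule for handling drop-outs changes the "endpoint" number reported.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_subjects, n_weeks = 100, 8

# Depression scores improving ~2 points/week, with noise.
weeks = np.arange(n_weeks)
scores = 30.0 - 2.0 * weeks + rng.normal(0, 3, size=(n_subjects, n_weeks))

# Informative dropout: subjects improving slowly (still scoring high at
# week 3) leave the study at week 4. This is where LOCF is most biased.
dropout_week = np.where(scores[:, 3] > 26, 4, n_weeks)
for i in range(n_subjects):
    scores[i, dropout_week[i]:] = np.nan

completers_endpoint = np.nanmean(scores[:, -1])  # observed final visits only
locf_endpoint = pd.DataFrame(scores).ffill(axis=1).iloc[:, -1].mean()

print(f"Completers-only endpoint mean: {completers_endpoint:.1f}")
print(f"LOCF endpoint mean:            {locf_endpoint:.1f}")
# Both analyses use the same raw data, and neither is "dishonest" --
# yet the two final numbers differ, and so may the conclusions drawn.
```

With informative dropout like this, the bias can point in either direction, depending on who leaves the study and when.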
The following article illustrates how these techniques can skew the presentation of data, and therefore change our conclusions about an issue, despite nothing "dishonest" taking place:
http://thelastpsychiatrist.com/2009/06/its_not_a_lie_if_its_true.html#more
While I agree with the general point of the above article, I am not convinced that the specific example it criticizes is actually more biased. Researching the subject myself, I find that LOCF is not necessarily superior to MMRM, although LOCF is the most commonly used method for dealing statistically with drop-outs. The following references make a case that MMRM is less biased than LOCF most of the time (though it should be noted that whenever subjects are lost to follow-up, the absence of data on them weakens the study's results; this issue deserves close attention when reading a paper):
http://www.stat.tamu.edu/~carroll/talks/locfmmrm_jsm_2004_rjc.pdf
http://www3.interscience.wiley.com/journal/114177424/abstract?CRETRY=1&SRETRY=0
In conclusion, I can only encourage readers of studies to become more informed about statistics. And if you are looking at a study that could change your treatment of an illness, it is important to read the whole study in detail, if possible, not just the abstract.
Which is better, a simple drug or a complex drug?
Here is another critique of medication marketing trends in psychiatry:
http://thelastpsychiatrist.com/2009/04/how_dangerous_is_academic_psyc_1.html#more
I agree quite strongly that there has been collusion between:
- psychiatrists eager to meaningfully apply their knowledge of psychopharmacology, pharmacokinetics, neurotransmitter receptor binding profiles, etc. (to justify all those years of study)
- and pharmaceutical company sales reps
I can recall attending many academic rounds presentations at which a new drug, for example a newly released SSRI, would be discussed. During the talk there would be boasting about how the new drug had the highest "receptor specificity," or the lowest activity at receptors other than those for serotonin (e.g. those for histamine or acetylcholine).
The facts I was shown, while I enjoyed my corporate-sponsored lunch, were true. But they were used as sales tactics, bypassing clear scientific thought. Just because something is more "receptor-specific" doesn't mean it works better! In some cases specificity may be related to a difference in side effects. Yet sometimes those very side effects may be related to the drug's efficacy.
By way of counter-example, I would cite the most effective of all antipsychotic medications, clozapine. This drug has very little "receptor specificity": it interacts with all sorts of different receptors, and it has loads of side effects too. Perhaps this is part of the reason it works so well. Unfortunately, this does not sit well with those of us who yearn to explain psychiatric medication effects using simple flow charts.
Similarly, pharmacokinetic differences between medications are often used as instruments of persuasion, yet they are frequently clinically irrelevant, of unproven clinical relevance, or even clinically inferior. For example, newer SSRI antidepressants have short half-lives, which can be advantageous in some regards; but plain old Prozac (fluoxetine), with its very long half-life, can be an excellent choice, because individuals taking it can safely skip a dose without a big change in serum level or the ensuing side effects.
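As a rough illustration of the half-life point, here is a back-of-the-envelope sketch of first-order elimination after a missed daily dose. The half-life figures are approximate and purely illustrative, not dosing guidance:

```python
# Back-of-the-envelope first-order elimination after a skipped daily dose.
# Half-life values are rough illustrative approximations, not dosing advice.

def fraction_remaining(hours_since_dose: float, half_life_hours: float) -> float:
    """Fraction of peak serum level left after simple exponential decay."""
    return 0.5 ** (hours_since_dose / half_life_hours)

hours = 48  # one missed daily dose: ~48 h since the last dose was taken

for name, t_half in [("short half-life SSRI (~21 h)", 21),
                     ("fluoxetine (~4 days)", 96)]:
    print(f"{name}: about {fraction_remaining(hours, t_half):.0%} of the level remains")
# The short half-life drug falls to roughly 20% of its serum level, while
# fluoxetine stays near 70% -- a gentler consequence of the skipped dose.
```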
I should not be too cynical here: it is important to know the scientific facts that can be known about something. Receptor binding profiles, half-lives, and so on are important, and it can be useful to find medications that have fewer side effects because of fewer extraneous receptor effects. The problem arises when we use facts spuriously, or allow them to persuade us as part of someone's sales tactic.
So, coming back to the question in the title, I would say it is not necessarily relevant whether a drug works in a simple or complex way. It is relevant whether it works empirically, irrespective of the complexity of its pharmacologic effects.
Pregnancy & Depressive Relapse
I was looking at an article in JAMA from 2006 about pregnant women taking antidepressants. The women were followed through pregnancy, and depressive relapses were related to changes in antidepressant dose. Here's a link to the abstract:
http://www.ncbi.nlm.nih.gov/pubmed/16449615
The study is too weakly designed to allow strong conclusions. Yet the abstract makes a statement about "pregnancy not being protective" which, while possibly true, is not directly supported by the study's findings. This criticism was astutely made by the author of "The Last Psychiatrist" blog:
http://thelastpsychiatrist.com/2006/10/jama_deludes.html
Yet the JAMA study is not uninformative.
And the criticism mentioned above goes a bit too far, in my opinion. The critique itself makes overly strong statements in its own title and abstract.
It appears quite clear that pregnant women with a history of depressive illness who are taking antidepressants, but who decrease or discontinue their medication during pregnancy, have a substantially higher risk of depressive relapse.
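As an aside on how a finding like this is quantified, here is a minimal sketch of the relative-risk arithmetic behind such a claim. The counts are hypothetical placeholders, not the actual figures from the JAMA study:

```python
# Relative-risk arithmetic for a two-group relapse comparison.
# Counts below are hypothetical placeholders, NOT the JAMA study's data.
relapsed_stopped, total_stopped = 40, 60        # discontinued medication
relapsed_maintained, total_maintained = 15, 60  # maintained medication

risk_stopped = relapsed_stopped / total_stopped
risk_maintained = relapsed_maintained / total_maintained
relative_risk = risk_stopped / risk_maintained

print(f"Relapse risk if discontinued: {risk_stopped:.0%}")
print(f"Relapse risk if maintained:   {risk_maintained:.0%}")
print(f"Relative risk:                {relative_risk:.1f}")
# A relative risk well above 1 signals a strong association; without
# randomization, though, it cannot by itself settle the direction of cause.
```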
Because the study was not randomized, we cannot know for sure that this association is causal, but causation is reasonably suggested. It does not seem likely that this large effect was produced by women whose "unstable" depressive symptoms led them to discontinue their antidepressants; that is, "reverse causation" does not seem to me a prominent explanation for the finding. It could happen in some cases, but not frequently. Nor does it seem likely that a woman already taking an antidepressant, who becomes more depressed during the pregnancy, would therefore stop taking her medication. This, too, could happen (I can think of clinical examples), but I don't think it would be common. The most likely causal story is quite simple: stabilized depressive illness during pregnancy is likely to become less stable, and more prone to relapse, if antidepressant medication is discontinued.
The critique also notes that women in the study who increased their doses of medication had higher rates of depressive relapse as well, a fact barely mentioned in the abstract or conclusion. This finding is not surprising either: what reason would a pregnant woman have to increase the dose of a medication she was already taking, other than an escalation of symptoms? In this case, depressive relapse (which can happen despite medication treatment) is likely the cause of the increased dose; the increased dose is unlikely to have caused the relapse.
Yet, as I said above, the study only allows us to infer these conclusions, as it was not randomized. And I agree that the authors overstate their conclusions in the abstract. In order to more definitively answer these questions, a randomized prospective study would need to be done.
Tuesday, September 29, 2009
Astronomical Photographs
For something completely different--
Have a look at NASA's "astronomy picture of the day" site: http://apod.nasa.gov/apod/
It's interesting, awe-inspiring--and I hope therapeutic--to be reminded of things much larger than ourselves.
Here are some of my favourite pictures from the NASA site:
the sun:
http://antwrp.gsfc.nasa.gov/apod/ap030418.html
http://antwrp.gsfc.nasa.gov/apod/ap021114.html
http://antwrp.gsfc.nasa.gov/apod/ap061204.html
http://antwrp.gsfc.nasa.gov/apod/ap000928.html
http://antwrp.gsfc.nasa.gov/apod/ap080924.html
galaxies:
http://antwrp.gsfc.nasa.gov/apod/ap081012.html
http://antwrp.gsfc.nasa.gov/apod/ap080927.html
http://antwrp.gsfc.nasa.gov/apod/ap050112.html
http://antwrp.gsfc.nasa.gov/apod/ap090701.html
Jupiter:
http://antwrp.gsfc.nasa.gov/apod/ap090106.html