Tuesday, October 13, 2009

Increasing anxiety in recent decades...continued

This is a sequel to a previous posting (http://garthkroeker.blogspot.com/2009/06/increasing-anxiety-in-recent-decades.html)

A visitor suggested the following July 2009 article to look at regarding this subject--here's a link to the abstract:
http://www.ncbi.nlm.nih.gov/pubmed/19660164

The author, "Ian Dowbiggin, PhD", is a history professor at the University of Prince Edward Island.

I found the article quite judgmental and poorly informed.

I thought there were some good points, exploring the interaction of social dynamics, political factors, secondary gain, etc. in the evolution of diagnostic labels; and perhaps exploring the idea that we may at times over-pathologize normal human experiences, character traits, or behaviours.

But, basically the author's message seems to be that we cling to diagnostic labels to avoid taking personal responsibility for our problems--and that therapists, the self-help movement, pharmaceutical companies, etc. are all involved in perpetuating this phenomenon.

Another implied point of view was that a hundred years ago, people might well have experienced similar symptoms, but would have accepted these symptoms as part of normal life, and carried on (presumably without complaint).

To quote the author:

"The overall environment of modern day life...bestows a kind of legitimacy on the pool of
anxiety-related symptoms"

This implies that some symptoms are "legitimate" and others are not, and that it is some kind of confusing or problematic feature of modern society that anxiety symptoms are currently considered "legitimate."

I am intensely annoyed by opinion papers which do not explore the other side of the issues--so here's another side to this one:

1) perhaps, a hundred years ago, people suffered just as much, or worse, but lacked any sort of help for what was bothering them. They therefore lived with more pain, less productivity, less enjoyment, less of a voice, more isolation, and in most cases died at a younger age.

2) The development of a vocabulary to describe psychological distress does not necessarily cause more distress. The vocabulary helps us to identify experiences that were never right in the first place. The absence of a PTSD label does not mean that symptoms secondary to trauma did not exist before the 20th century. The author somewhat mockingly suggests that some people misuse a PTSD or similar label--that perhaps only those subject to combat trauma are entitled to use it, while those subject to verbal abuse in home life are not.

The availability of financial compensation related to PTSD has undoubtedly affected the number of people describing symptoms. But the author appears to leave readers with the impression that those seeking compensation via PTSD claims are "milking the system" (this is the subtitle of the PTSD section of the paper). There is little doubt that factitious and malingered symptoms are common, particularly when there is overt secondary gain. And the question of how therapeutic it is to have long-term financial compensation for any sort of problem is another matter, for an evidence-based though politically charged debate. But to imply that all those who make financial claims regarding PTSD are "milking the system" seems very disrespectful to me. And to imply that a system which offers such compensation is somehow problematic again seems comparable to saying that the availability of fire or theft insurance is problematic. A constructive point of view on the matter, as far as I'm concerned, would be to consider ways to make compensation systems fairer and more resistant to factitious or malingered claims.

With regard to social anxiety -- it may well be that "bashfulness" has been valued and accepted in many past--and present--cultures. But I suspect that the social alienation, social frustration, loneliness, and lack of ability to start new friendships, new conversations, or to find mates, have been similarly prevalent over the centuries. Our modern terminology offers a person who is "bashful" the choice of whether to stoically and silently accept this set of phenomena, or to address it as a medical problem, with a variety of techniques to change the symptoms. In this way the language can be empowering, leading to the discovery and nurturance of a voice, rather than to a sense of "victimhood."

Perhaps the lack of a vocabulary to articulate distress creates a spurious impression that the distress does not exist, or is not worthy of consideration. A historical analogy might go something like this: terms such as "molecule," "uranium," or "electromagnetic field" may not have been used before 1701, 1797, or 1820, but this was merely a product of ignorance, not evidence of the non-existence of these phenomena in the 1600s and prior.

It may well be true that many individuals misuse the vocabulary, or may exploit it for secondary gain. And it may well be true that some diagnostic labels introduce an iatrogenic or factitious illness (the multiple personality disorder issue could be debated along these lines). But to imply that the vocabulary itself is harmful to society is akin to saying that fire insurance is harmful, since some people misuse it by deliberately burning their houses down.


3) Similarly, the so-called self-help movement may, for some individuals, be part of a flight into self-pathologizing language, while ironically neglecting a healthy engagement with life. But in most cases it has actually helped people to recognize, label, and improve their problems. For a start on some evidence regarding this, see the following meta-analysis on self-help for anxiety disorders: http://www.ncbi.nlm.nih.gov/pubmed/16942965

---
So, in conclusion, it is interesting to hear a different point of view. But I would expect a distinguished scholar to provide a much more balanced and insightful debate in such a paper, especially when it is published in a journal which is supposed to have high standards.

And I would certainly expect a much more thorough exploration of research evidence. The presence of 35 references in this paper may fool some readers into thinking that a reasonable survey of the research has been undertaken. Almost all of the references are themselves opinion pieces which merely support the author's point of view.

Thursday, October 8, 2009

Is Seroquel XR better than generic quetiapine?

A supplement written by Christoph Correll for The Canadian Journal of Diagnosis (September 2009) was delivered--free--into my office mailbox the other day.

It starts off describing the receptor-binding profiles of different atypical antipsychotic drugs. A table is presented early on.

First of all, the table as presented is almost meaningless: it merely shows the concentrations of the different drugs required to block 50% of the given receptors. These so-called "Ki" values have little meaning, particularly for comparing one drug with another, UNLESS one has a clear idea of what concentrations the given drugs actually reach when administered at typical doses.

So, of course, quetiapine has much higher Ki concentrations for most receptors, compared to risperidone -- this is related to the fact that quetiapine doses are in the hundreds of milligrams, whereas risperidone doses are less than ten milligrams (these dose differences are not reflective of anything clinically relevant, and only pertain to the size of the tablet needed).

A much more meaningful chart would show one of the following:

1) the receptor blockades for each drug when the drug is administered at typical doses

2) the relative receptor blockade compared to a common receptor (so, for example, the ratio between receptor blockades of H1 or M1 or 5-HT2 compared to D2, for each drug) -- a toy calculation below illustrates this second approach.
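To make the second option concrete, here is a minimal sketch in Python. The Ki numbers are invented for illustration only (they are not taken from the article or from any binding study); the point is just that scaling each receptor's Ki to the same drug's own D2 Ki removes the misleading effect of absolute dose size:

```python
# Toy illustration: relative receptor affinity expressed as Ki ratios
# against D2. All Ki values (nM) below are MADE UP for illustration.
# Lower Ki = tighter binding, so a ratio < 1 means that receptor is
# bound more avidly than D2 at any given drug concentration.

hypothetical_ki = {
    "drug_A": {"D2": 2.0, "H1": 20.0, "M1": 500.0, "5-HT2A": 0.5},
    "drug_B": {"D2": 300.0, "H1": 10.0, "M1": 100.0, "5-HT2A": 250.0},
}

for drug, kis in hypothetical_ki.items():
    d2 = kis["D2"]
    ratios = {r: round(ki / d2, 2) for r, ki in kis.items() if r != "D2"}
    print(drug, ratios)

# drug_B's large absolute Ki values (a "hundreds of milligrams" drug)
# stop mattering once everything is scaled to its own D2 affinity --
# which is exactly why ratios are more informative than raw Ki tables.
```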

The article goes on to explore a variety of other interesting differences between antipsychotics. Many of the statements made were theoretical propositions, not necessarily well-proven empirically. But in general I found this discussion valuable.

Despite the author's apparent efforts to be fair and balanced regarding the different antipsychotics, I note a few things:

1) there are two charts in this article showing symptom improvements in bipolar disorder among patients taking quetiapine extended-release (Seroquel XR).

2) one large figure appears to show that quetiapine has superior efficacy in treating schizophrenia, compared to olanzapine and risperidone (the only "p<.05 asterisk" was for quetiapine!) -- this figure was based on a single 2005 meta-analysis, published in a minor journal, before the CATIE results were published. No other figures were shown based on more recent results, nor was clozapine included in any figure.

I think quetiapine is a good drug. BUT -- I don't see any evidence that quetiapine extended release is actually any better, in any regard, than regular quetiapine. In fact, I have seen several patients for whom regular quetiapine suited them better than extended-release, and for whom a smaller total daily dose was needed.

Here is a reference to one study, done by AstraZeneca, comparing Seroquel with Seroquel XR in healthy subjects: http://www.ncbi.nlm.nih.gov/pubmed/19393840 It shows that subjects given regular quetiapine were much more sedated 1 hour after dosing, compared to those given the same dose of Seroquel XR, implying that the extended-release drug was superior in terms of side-effects. Here is my critique of this study:

1) Sedation is often a goal in giving quetiapine, particularly in the treatment of psychosis or mania.

2) Problematic sedation is usually the type that persists 12 hours or more after the dose, as opposed to one hour after the dose. In this study, the two formulations did not differ in a statistically significant way with respect to sedation 7, 8, or 14 hours after dosing. In fact, if you look closely at the tables presented within the article, you can see that the Seroquel XR group actually had slightly higher sedation scores 14 hours after dosing.

3) Dosing of any drug can be titrated to optimal effect. Regular quetiapine need not be given at exactly the same dose as quetiapine XR -- to give both drugs at the same dose, rather than at the optimally effective dose for each, is likely to bias the results greatly.

4) The study lasted only 5 days for each drug! In order to meaningfully compare effectiveness or side-effects between two different drugs, it is necessary to look at differences after a month, or after a year, of continuous treatment. For most sedating drugs, problematic sedation diminishes after a period of weeks or months. Once again, if immediate sedation is the measure of side-effect adversity, then this study is biased in favour of Seroquel XR.

5) The study was done in healthy subjects who did not have active symptoms to treat. This reminds me of giving insulin to non-diabetic subjects and comparing the side-effects of the different insulin preparations: the choice of population is an obvious strong bias!


Regular quetiapine has gone generic.

Quetiapine extended-release (Seroquel XR) has not.

I am bothered by the possibility of bias in Correll's article.

It is noted, in small print at the very end of this article, that Dr. Correll is "an advisor or consultant to AstraZeneca, Bristol-Myers Squibb, Cephalon, Eli Lilly, Organon, Ortho McNeill-Janssen, Otsuka, Pfizer, Solvay, Supernus, and Vanda." AstraZeneca is the company which manufactures Seroquel XR.

In conclusion, I agree that there are obviously differences in receptor binding profiles between these different drugs. There are some side-effect differences.

Differences in actual effectiveness, as shown in comparative studies, are minimal. But probably olanzapine, and especially clozapine, are slightly better than the others, in terms of symptom control.

Quetiapine can be an excellent drug. Seroquel XR can be an excellent formulation of quetiapine, and might suit some people better.

BUT -- there is no evidence that brand-name Seroquel XR is superior to generic regular quetiapine.

One individual might respond better to one drug, compared to another.

The author, despite including 40 references, seems to have left out many important research studies on differences between antipsychotics, such as from CATIE and SOHO.

(see my previous post on antipsychotics: http://garthkroeker.blogspot.com/2008/12/antipsychotic-medications.html )

Monday, October 5, 2009

Hallucinations

Hallucinations are perceptions which take place in the absence of a stimulus from the peripheral or sensory nervous system.

They may be classified in a variety of different ways (this is an incomplete list):
1) by sensory modality
a) auditory: these are most common, and may be perceived as voices speaking or mumbling; musical sounds; or other more cacophonous sounds
b) visual: these can occur more commonly in delirious states or medical illnesses affecting the brain. Many people experience normal, but unsettling, visual hallucinations, just when falling asleep or waking up.
c) tactile: these are most common in chemical intoxication syndromes, such as with cocaine.
d) olfactory: more common in medical illness

2) by positionality
-when describing hallucinated voices, if the voices are perceived to originate inside the head, or to not have any perceived origin, then they could be called "pseudohallucinations." If the voices are perceived to originate from a particular place, such as from the ceiling or from across the room, then they could be called "hallucinations" or "true hallucinations." This terminology has been used to distinguish between the hallucinations in schizophrenia and psychotic mood disorders (which are typically "true hallucinations") and those experienced in non-psychotic disorders (pseudohallucinations are more typically--though not invariably--associated with dissociative disorders, borderline personality, or PTSD).

3) by insight
An individual experiencing a "psychotic hallucination" will attribute the phenomenon to stimuli outside of the brain. An individual experiencing a "non-psychotic hallucination" will attribute the phenomenon to his or her own brain activity, and recognize the absence of an external stimulus to account for the experience. In most cases, "insight" fluctuates on a continuum, and many individuals experiencing hallucinations will have some intellectual understanding of their perceptions being hallucinatory, but still feel on a visceral level that the perceptions are "real."

4) by character
Voices in particular can be described in a variety of ways. So-called "first rank symptoms of schizophrenia" include hallucinated voices which comment on a person's behavior, or include several voices which converse with each other.
The quality of the voice can vary, with harsh, angry, critical tones more common in psychotic depression, and neutral emotionality more common in schizophrenic states.


--all of these above descriptions are incomplete, and associations between one type of hallucination and a specific "diagnosis" are imperfect. A great deal of variation exists--

It is probably true that some hallucinations are factitious (i.e. the person is not actually hallucinating, despite claiming to), but of course this would be virtually impossible to prove. Something like functional brain imaging might be an interesting, though impractical, tool, to examine this phenomenon. People with psychotic disorders or borderline personality might at times describe factitious hallucinatory phenomena in order to communicate emotional distress or need to caregivers. Or sometimes the phenomena may convey some type of figurative meaning. The motivation to do this might not always be conscious.

There are a variety of ways to treat hallucinations.

In my opinion, the single most effective treatment is an antipsychotic medication. Hallucinations due to almost any cause are likely to diminish with antipsychotic medication treatment.

There is evolving evidence that CBT and other psychotherapy can help with hallucinations. Here are some references:
http://www.ncbi.nlm.nih.gov/pubmed/19176275
http://www.ncbi.nlm.nih.gov/pubmed/9827323

Some individuals may not be bothered by their hallucinations. In this case, it may sometimes be more the physician's agenda than the patient's to "treat" the symptom. Yet, it is probably true that active hallucinations in psychotic disorders are harbingers of other worsening symptoms, so it may be important to treat the symptom early, even if it is not troublesome.

Other types of behavioral tactics can help, including listening to music, wearing ear plugs, other distractions, etc. In dealing with pseudohallucinations or non-psychotic hallucinations, "mindfulness" exercises may be quite important. A well-boundaried psychodynamically-oriented therapy structure could be very helpful for non-psychotic hallucinations or pseudohallucinations associated with borderline personality dynamics or PTSD. Care would need to be taken, in these cases, not to focus excessively or "deeply" on the hallucinations, particularly without the patient's clear consent, since such a dialog could intensify the symptoms.

Mediterranean diet is good for your brain

In this month's Archives of General Psychiatry, a study by Sanchez-Villegas et al. is published showing a strong association between lower rates of depression, and consuming a Mediterranean diet (lots of vegetables, fruits, nuts, whole grains, and fish, with low intake of meat, moderate intake of alcohol & dairy, and lots of monounsaturated fatty acids compared to saturated fatty acids). Data was gathered prospectively during a period averaging over 4 years, and was based on following about 10 000 initially healthy students in Spain who reported food intake on questionnaires.

I'll have to look closely at the full text of the article. I'm interested to consider the question of whether the results strongly suggest causation, or whether the results could be due to non-causal association. That is, perhaps people in Spain with a higher tendency to become depressed tend to choose non-Mediterranean diets. Another issue is cultural: the study was done in Spain, where a Mediterranean diet may be associated with certain--perhaps more traditional--cultural or subcultural features, and this cultural factor may then mediate the association with depressive risk.

In any case, in the meantime, given the preponderance of other data showing health benefits from a Mediterranean-style diet, I wholeheartedly (!) recommend consuming more nuts, vegetables, olive oil, fish, whole grains, and fruit; and less red meat.

The need for CME

Here's another article from "the last psychiatrist" on CME:
http://thelastpsychiatrist.com/2009/07/who_should_pay_for_continuing.html#more

Another insightful article, but pretty cynical!

But here are some of my opinions on this one:

1) I think that, without formalized CME documentation requirements, there would be some doctors who would fall farther and farther behind in understanding current trends of practice, current research evidence, etc.
2) In the education of intelligent individuals, I have long felt that process is much more important than content. A particular article with accompanying quiz is bound to convey a certain biased perspective. It is my hope that most professionals are capable of understanding and resisting such biases. In this modern age, I do think that most of us have a greater understanding of bias, of being "sold" something. Anyway, I think that the process of working through such an article is a structure to contemplate a particular subject, and perhaps to raise certain questions or a debate in one's mind about it, to reflect further upon, or to research further, later on. Yet, I agree that there are many psychiatrists who might be more easily swayed in a non-critical manner, by a biased presentation of information. The subsequent quiz, and the individual's high marks on the quiz, become reinforcers for learning biased information.
3) After accurately critiquing a problem, we should then move on and try to work together to make more imaginative, creative educational programs which are stimulating, enjoyable, fair, and as free of bias as possible.

I think this concludes my little journey through this other blog. While interesting, I find it excessively cynical. It reminds me of someone in the back seat of my car continuously telling me--accurately, and perhaps even with some insightful humour--all the things I'm doing wrong. Maybe I need to hear this kind of feedback periodically--but small doses are preferable! Actually, I find my own writing at this moment becoming more cynical than I want it to be.

Opinions on mistakes psychiatrists make

Here's another interesting link from "the last psychiatrist" blog:

http://thelastpsychiatrist.com/2006/11/post_2.html#more


I agree with many of his points.

But here are a few counterpoints, in order:

1) I think some psychiatrists talk too little. There's a difference between nervous or inappropriate chatter diluting or interrupting a patient's opportunity to speak, and an engaged dialog focusing on the process or content of a problem. There is a trend in psychiatric practice, founded in or emphasized by psychoanalysis, that the therapist is to be nearly silent. Sometimes I think these silences are unhelpful, unnecessary, inefficient, even harmful. There are some patients I can think of for whom silence in a social context is extremely uncomfortable, and certainly not an opportunity for them to learn in therapy. Therapy in some settings can be an exercise in meaningful dialog, active social skills practice, or simply a chance to converse or laugh spontaneously.

I probably speak too much, myself--and I need to keep my mouth shut a little more often. I have to keep an eye on this one.

It is probably better for most psychiatrists to err on the side of speaking too little, I would agree. An inappropriately overtalkative therapist is probably worse than an inappropriately undertalkative one. But I think many of us have been taught to be so silent that we cannot be fully present, intuitively, personally, intellectually, to help someone optimally. In these cases, sometimes the tradition of therapeutic silence can suppress healthy spontaneity, positivity, and humour in a way which only delays or obstructs a patient's therapy experience.

2) I agree strongly with this one--especially when history details are ruminated about interminably during the first few sessions.
However, I do think that a framework for being comprehensive is important. And sometimes it is valuable, in my opinion, to review the whole history again after seeing a patient for a year, or for many years. There is so much focus on comprehensive history-taking during the first few sessions, or the first hour, that we forget to revisit or deepen this understanding after knowing a patient much better, later on. Whole elements of a patient's history can be forgotten, because they were only talked about once, during the first session.

There is a professional standard of doing a "comprehensive psychiatric history" in a single interview of no longer than 55 minutes. There may even be a certain bravado among residents, or an admiration for someone who can "get the most information" in that single hour. I object to this being a dogmatic standard. A psychiatric history, as a personal story, may take years to understand well, and even then the story is never complete. It can be quite arrogant to assume that a single brief interview (which, if optimal exchange of "facts" is to take place, can sound like an interrogation) can lead to a comprehensive understanding of a patient.

I do believe, though, that certain elements of comprehensiveness should be aimed for, and aimed for early. For example, it is very important to ask about someone's medical ailments, about substance use, about various symptoms the person may be too embarrassed to mention unless asked directly, etc. Otherwise an underlying problem could be entirely missed, and the ensuing therapy could be very ineffective or even deleterious.

Also, some individual patients may feel a benefit or relief to go through a very comprehensive historical review in the first few sessions, with the structure of the dialog supplied mainly from the therapist. Other individual patients may feel more comfortable, or find it more beneficial, to supply the structure of their story themselves. So maybe it's important not to make strong imperative statements on this question: as with so many other things in psychiatry, a lot depends on the individual situation.

3) I think it's important not to ignore ANY habitual behavior that could be harmful. Yet perhaps some times are better than others to address or push for things like smoking or soft-drink cessation: a person with a chronically unstable mood disorder may require improved mood stability (some of which may actually come from cigarette smoking, in a short-term sense anyway) before they are able to embark on a quit-smoking plan.

4) Not much to add here.
5) Well, point taken. I've written a post about psychiatry and politics before, and suggested a kind of detached, "monastic role." But on the other hand, any person or group may have a certain influence--the article here basically suggests that it's none of psychiatry's business to deal with political or social policy. Maybe not. But the fact is, psychiatry does have some influence to effect social change. And, in my opinion, it is obvious that social and political dynamics are driven by forces similar to the dynamics which operate in a single family, or in an individual's mind. So, if there is any wisdom in psychiatry, it could certainly be applicable to the political arena. Unfortunately, it appears to me that psychiatrists I have seen getting involved in politics or other group dynamics are just as swept up in dysfunctional conflict as anyone else.
But if there's something that psychiatry can do to help with war or world hunger, etc. -- why not? In some historic situations an unlikely organized group has come to the great aid of a marginalized or persecuted group in need of relief or justice, even though the organized group didn't necessarily have any specialized knowledge of the matter they were dealing with.

6) I strongly agree. I prefer to offer therapy to most people I see. And I think most people do not have adequate opportunities to experience therapy. Yet I do also observe that many individuals could be treated with a medication prescribed by a GP, and simply experience resolution of their symptoms. Subsequent "therapy" is done by the individual in their daily life, and does not require a "therapist." In these cases, the medication may not be needed anymore, maybe after a year or so. Sometimes therapists may end up offering something that isn't really needed, or may aggrandize the role or importance of "therapy" (we studied all those years to learn to be therapists, after all--therefore a therapist's view on the matter may be quite biased), when occasionally the best therapy of all could simply be self-provided. Yet, of course, many situations are not so simple at all, and that's where a therapy experience can be very, very important. I support the idea of respecting the patient's individual wishes on this matter, after providing the best possible presentation of the benefits and risks of different options. Of course, we're all biased in how we understand this benefit/risk profile.
7) Some interesting points here... but subject to debate. Addressing these complex subjects in an imperative manner makes me uncomfortable.
8) Polypharmacy should certainly not be the norm, though intelligent use of combination therapies, in conjunction with a clear understanding of side-effect risks, can sometimes be helpful. Some of the statements made in this section have actually not been studied well--for example, the claim that it makes no pharmacological sense to combine two different SSRI antidepressants at the same time. There has not been a body of research data PROVING that such a combination is in fact ineffectual. Therefore, before we scoff at the practitioner who prescribes two SSRIs at once, I think we should look at the empirical result--since there are no prospective randomized studies, the best we can do is see whether the individual patient is feeling better, or not.
9) I'm not a big fan of "diagnosis," but sometimes, and for some individuals, it can be part of a very helpful therapy experience to be able to give a set of problems a name. This name, this category, may lead the person to understand more about causes & solutions. Narrative therapy makes good use, I think, of "naming" (a variant of "diagnosing") as a very useful therapeutic construct.

10) There isn't a tenth point in the article, but the comments at the end of it were good.

Biased presentation of statistical data: LOCF vs. MMRM

This is a brief posting about biostatistics.

In clinical trials, some subjects drop out.

The quality of a study is best if there are few drop-outs, and if data continues to be collected on those who have dropped out.

LOCF (last observation carried forward) and MMRM (mixed-effects model for repeated measures) are two different statistical approaches to dealing with study populations where some of the subjects have dropped out.

One technique or the other may generate different conclusions, different numbers to present.
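As a concrete illustration, here is a minimal sketch of what LOCF actually does, using an invented symptom-score series (the numbers are made up; this is not data from any study):

```python
# Minimal sketch of LOCF (last observation carried forward) applied to
# a toy weekly symptom-score series. None marks visits missed after
# the subject dropped out. Illustrative numbers only.

def locf(scores):
    """Replace each missing visit with the most recent observed value."""
    filled, last = [], None
    for s in scores:
        if s is not None:
            last = s
        filled.append(last)
    return filled

# A subject improving steadily until dropping out after week 4:
observed = [30, 26, 22, 18, None, None, None, None]
print(locf(observed))  # [30, 26, 22, 18, 18, 18, 18, 18]
```

LOCF freezes the week-4 score through to the endpoint, so the final estimate depends heavily on when subjects drop out. MMRM instead fits a mixed-effects model to all observed time points rather than imputing a single carried-forward value, which is why the two approaches can yield different numbers from the same raw data.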

The following article illustrates how these techniques can skew the presentation of data, and therefore change our conclusions about an issue, despite nothing "dishonest" taking place:

http://thelastpsychiatrist.com/2009/06/its_not_a_lie_if_its_true.html#more

While I agree with the general point of the above article, I find that the specific example it refers to is not necessarily more biased: as I research the subject myself, I find that LOCF is not necessarily superior to MMRM, although LOCF is the most commonly used method to deal statistically with drop-outs. The following references make a case that MMRM is less biased than LOCF most of the time (although it should be noted that whenever any drop-outs are lost to follow-up, the absence of data on these subjects weakens the study results--it is important to consider this issue closely when reading a paper):
http://www.stat.tamu.edu/~carroll/talks/locfmmrm_jsm_2004_rjc.pdf
http://www3.interscience.wiley.com/journal/114177424/abstract?CRETRY=1&SRETRY=0

In conclusion, I can only encourage readers of studies to be more informed about statistics. And, if you are looking at a study which could change your treatment of an illness, then it is important to read the whole study, in detail, if possible (not just the abstract).

Which is better, a simple drug or a complex drug?

Here is another critique of medication marketing trends in psychiatry:

http://thelastpsychiatrist.com/2009/04/how_dangerous_is_academic_psyc_1.html#more

I agree quite strongly that there has been a collusion between:
- psychiatrists who eagerly yearn to meaningfully apply their knowledge of psychopharmacology, pharmacokinetics, neurotransmitter receptor binding profiles, etc. (to justify all those years of study)
- and pharmaceutical company sales reps

I can recall attending many academic rounds presentations in which a new drug would be discussed, for example a newly released SSRI. During the talk, there would be boasting about how the new drug had the highest "receptor specificity," or had the lowest activity at receptors other than those for serotonin (e.g. for histamine or acetylcholine).

These facts that I was being shown, while enjoying my corporate-sponsored lunch, were true. But they were used as sales tactics, bypassing clear scientific thought. Just because something is more "receptor-specific" doesn't mean that it works better! It may in some cases be related to a difference in side effects. Yet sometimes those very side-effects may be related to the efficacy of the drug.

By way of counter-example, I would cite the most effective of all antipsychotic medications, clozapine. This drug has very little "receptor-specificity." It interacts with all sorts of different receptors. And it has loads of side effects too. Perhaps this is part of the reason it works so well. Unfortunately, this does not sit well with those of us who yearn to explain psychiatric medication effects using simple flow charts.

Similarly, the pharmacokinetic differences between different medications are often used as instruments of persuasion--yet oftentimes they are either clinically irrelevant, of unproven clinical relevance, or even clinically inferior (e.g. newer SSRI antidepressants have short half-lives, which can be advantageous in some regards; but plain old Prozac, with its very long half-life, can be an excellent choice, because individuals taking it can safely skip a dose without a big change in serum level and the ensuing side-effects).
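A rough back-of-the-envelope calculation shows why the long half-life smooths over a missed dose. The half-lives below are approximate literature figures (and fluoxetine's active metabolite, norfluoxetine, persists even longer), so treat this as an illustrative sketch rather than precise pharmacokinetics:

```python
# First-order elimination: fraction of drug remaining after a given
# time, for a short half-life SSRI vs. fluoxetine (Prozac).
# Half-lives are rough approximations, used here only for illustration.
import math

def fraction_remaining(hours, half_life_h):
    """Fraction of serum level left `hours` after the last dose."""
    return math.exp(-math.log(2) * hours / half_life_h)

# 48 h after the last dose, i.e. one skipped dose on a daily schedule:
for drug, t_half in [("short half-life SSRI (~21 h)", 21),
                     ("fluoxetine (~4 days)", 96)]:
    print(drug, round(fraction_remaining(48, t_half), 2))

# short half-life SSRI (~21 h) 0.2  -- about 80% of the level is gone
# fluoxetine (~4 days) 0.71         -- the serum level barely dips
```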

I should not be too cynical here -- it is important to know the scientific facts that can be known about something. Receptor binding profiles and half-lives, etc. are important. And it can be useful to find medications that have fewer side-effects, because of fewer extraneous receptor effects. The problem is when we use facts spuriously, or allow them to persuade us as part of someone's sales tactic.

So, coming back to the question in the title, I would say it is not necessarily relevant whether a drug works in a simple or complex way. It is relevant whether it works empirically, irrespective of the complexity of its pharmacologic effects.

Pregnancy & Depressive Relapse

I was looking at an article in JAMA from 2006, which was about pregnant women taking antidepressants. They were followed through pregnancy, and depressive relapses were related to changes in antidepressant dose. Here's a link to the abstract:

http://www.ncbi.nlm.nih.gov/pubmed/16449615

The study is too weakly designed to allow strong conclusions. Yet the abstract makes a statement about "pregnancy not being protective" which--while possibly true--is not directly supported by the findings of the study. This criticism was astutely made by the author of "The Last Psychiatrist" blog:
http://thelastpsychiatrist.com/2006/10/jama_deludes.html

Yet the JAMA study is not uninformative.

And the criticism mentioned above goes a bit too far, in my opinion. The critique itself makes overly strong statements in its own title & abstract.

It appears quite clear that pregnant women with a history of depressive illness, who are taking antidepressants, but decrease or discontinue their medication during the pregnancy, have a substantially higher risk of depressive relapse.

Because the study was not randomized, we cannot know for sure that this association is causal. But causation would be reasonably suggested. It does not seem likely that this large effect would have been caused by women whose "unstable" depressive symptoms led them to discontinue their antidepressants (i.e. it does not seem likely to me that "reverse causation" would be a prominent cause for this finding). I think this could happen in some cases, but not frequently. Nor does it seem likely to me that a woman already taking an antidepressant, who becomes more depressed during the pregnancy, would therefore stop taking her medication. This, too, could happen (I can think of clinical examples), but I don't think it would be common. It seems most likely to me that the causation is quite simple: stabilized depressive illness during pregnancy is likely to become less stable, and more prone to relapse, if antidepressant medication is discontinued.

The critique of this article also discusses the fact that women in the study who increased their doses of medication also had higher rates of depressive relapse, yet this fact is not mentioned very much in the abstract or conclusion. This finding is also not surprising--what other reason would a pregnant woman have to increase a dose of medication which she was already taking during her pregnancy, other than an escalation of symptoms? In this case, depressive relapse (which can happen despite medication treatment) is likely the cause of the increased dose--the increased dose is unlikely to have caused the depressive relapse.

Yet, as I said above, the study only allows us to infer these conclusions, as it was not randomized. And I agree that the authors overstate their conclusions in the abstract. In order to more definitively answer these questions, a randomized prospective study would need to be done.

Tuesday, September 29, 2009

Astronomical Photographs

For something completely different--

Have a look at NASA's "astronomy picture of the day" site: http://apod.nasa.gov/apod/

It's interesting, awe-inspiring--and I hope therapeutic--to be reminded of things much larger than ourselves.

Here are some of my favourite pictures from the NASA site:

the sun:
http://antwrp.gsfc.nasa.gov/apod/ap030418.html
http://antwrp.gsfc.nasa.gov/apod/ap021114.html
http://antwrp.gsfc.nasa.gov/apod/ap061204.html
http://antwrp.gsfc.nasa.gov/apod/ap000928.html
http://antwrp.gsfc.nasa.gov/apod/ap080924.html

galaxies:
http://antwrp.gsfc.nasa.gov/apod/ap081012.html
http://antwrp.gsfc.nasa.gov/apod/ap080927.html
http://antwrp.gsfc.nasa.gov/apod/ap050112.html
http://antwrp.gsfc.nasa.gov/apod/ap090701.html

jupiter:
http://antwrp.gsfc.nasa.gov/apod/ap090106.html

N-Acetylcysteine for treatment of compulsive disorders

N-acetylcysteine is an antioxidant which modulates the glutamate system in the brain. Glutamate is actually the most prevalent neurotransmitter in the brain, and generally has strongly activating effects on nerve cells.

A recent study in Archives of General Psychiatry described groups of individuals with compulsive hair-pulling behavior (trichotillomania), randomized to receive either placebo, or N-acetylcysteine 1200 mg/day, then up to 2400 mg/day, over 12 weeks:
http://www.ncbi.nlm.nih.gov/pubmed/19581567

The N-acetylcysteine group had about 50% reduction in hair-pulling behaviour, with no change in the placebo group. Those in the N-acetylcysteine group did not report any side effects. In fact, the only side effects were among those in the placebo group.

The same author published a study in 2008 showing a substantial improvement in compulsive gambling behavior in a group given NAC at an average dose of about 1500 mg/d:
http://www.ncbi.nlm.nih.gov/pubmed/17445781

A very preliminary study showed that NAC may have some promise in treating cocaine addiction:
http://www.ncbi.nlm.nih.gov/pubmed/17113207

NAC has shown some promise as an adjunctive treatment for chronic schizophrenia; in this study the dose was 1000 mg twice daily, over 24 weeks. Once again, there were no side-effects. As I look at the body of the paper, I see that there was a definite favorable effect from the NAC compared to placebo, in several domains, but the size of the effect seemed clinically modest:
http://www.ncbi.nlm.nih.gov/pubmed/18436195

So NAC appears to be an appealing therapy for a variety of frequent, and often difficult-to-treat, psychiatric symptoms. There do not appear to be problems with side effects.

At this point, NAC can be obtained from health food stores in Canada, as a nutritional supplement.  It is also on the prescription formulary in an injectable form for treating acetaminophen toxicity. 

Friday, September 25, 2009

Randomized Controlled Trials in psychiatry

There is a good debate presented in the September 2009 issue of the Canadian Journal of Psychiatry (pp. 637-643), about the importance of randomized controlled trials in psychiatric research and clinical practice.

Steven Hollon presents a strong case supporting the philosophical foundations of RCT research, while Bruce Wampold presents many good points about the present limitations and weaknesses prevalent in current psychiatric RCT research studies. In particular, Wampold points out that much evidence exists regarding the relevance of the individual therapist (and, I might add, of the individual sense of patient-therapist alliance or connection) in determining therapeutic outcomes, and that this very individual factor may have a stronger influence on outcome than the particular "treatment" being offered (whether it be CBT, psychoanalysis, a medication combination, etc.).

My own view of a lot of the evidence resonates with these ideas. I strongly support the importance of randomized controlled trials in medicine and psychiatry. Yet it often seems to me that many variables are not accounted for. The impact of the individual therapist is one specific factor. If the patient is more comfortable with one therapist than another, then this factor alone may greatly outweigh the effect of the particular style of therapy being offered. Interestingly, this factor may not necessarily depend on the length of experience of the therapist -- sometimes a trainee may have a more positive therapeutic impact than a therapist who has decades of experience. This fact is not surprising to me: a lot of psychotherapy has to do with the capacity for the therapeutic relationship to grow and be healthy, which may depend substantially on very personal factors in the therapist. This may be humbling to those of us who revere the notion of psychotherapeutic theory being of paramount importance.

The whole of psychiatric theory may, at least in some cases, be less important than the goodness of a single interpersonal connection.

But I do also believe that certain therapeutic techniques are more effective than others. I think that strategies which promote daily long-term psychological work just have to be more effective (along the lines of language learning again). Also I think that strategies which encourage and help a person to face their fears or to move away from destructive habits are more likely to be helpful than strategies which do not look at these issues.

Many other factors are often not controlled (or examined at all) in present psychiatric RCTs, including nutrition, exercise, other self-care activities, supportive relationship involvement, community involvement, altruistic activity, etc.

Another factor that I have considered is the heterogeneity of many studied psychiatric populations. Different individuals with so-called "major depressive disorder" may in fact have different underlying causes for their symptoms; some of these individuals may respond well to one type of treatment, others may respond to something else. I suppose the RCT design remains appropriate in this situation, yet a powerful focus in research, in my opinion, needs to be to examine why some people respond to something, while others don't.

This erratic pattern of response doesn't just happen with individuals in a particular study. There are whole studies in which a well-proven psychiatric treatment (such as an antidepressant) doesn't end up differing from placebo. I don't think such studies show that antidepressants (or other treatments) are ineffective, but I do think it strongly suggests that the current criteria for psychiatric diagnoses are insufficient to predict treatment response as consistently as we need.
Oftentimes, these negative studies are dismissed automatically. In many cases, such studies have been poorly designed, and that is the main problem. But in other cases, I think we need to examine such negative studies very carefully, to understand why they were negative.

This is consistent with another type of scientific rigor (different from the RCT empirical approach): in mathematics, a single counterexample is sufficient to disprove a conjecture. If such a counterexample is found, it can be extremely fruitful to examine why it occurred--in this way a new and more valuable theorem can be conceived. The process of generating the disproven conjecture was not a waste of time, but could be understood as part of a process of finding the accurate theorem. Such examples abound in other fields, such as computer programming--a program or algorithm may work quite well, but generate errors or break down in certain situations. Careful examination of why the errors are taking place is the only way to improve the program, and perhaps also to more deeply understand the problem the program was supposed to solve.

Wednesday, September 16, 2009

Perils of Positive Thinking?

Joanne Wood et al. had an article published in Psychological Science in June 2009. It was a study in which subjects with low self-esteem felt worse after doing various "positive thinking" exercises. Subjects with higher self-esteem felt better with self-affirming statements.

Here is a link to the abstract: http://www.ncbi.nlm.nih.gov/pubmed/19493324


So the study seems to suggest that it could be detrimental to engage in "positive thinking" if you are already having depressive thoughts, or negative thoughts about yourself or your situation. The authors theorize that if you have a negative view of yourself, then forcing yourself to make a positive statement about yourself may simply draw more attention in your mind to your own negative self-view. The positive statement may simply seem ridiculous, unrealistic, unattainable, perhaps a reminder of something you don't have or feel that you cannot ever have.

However, the study is weak, and demonstrates something that most of us could see to be obviously true. The study is cross-sectional, and looks at the effect of a single episode of forced "positive thinking." This is like measuring the effect of marathon training after one single workout, and finding that those already in good shape really enjoyed their workout, while those who hadn't run before felt awful afterward.

Any exercise to change one's mind has to be practiced and repeated over a period of months or years. A single bout of exercise will usually accomplish very little. In fact, it will probably lead to soreness or injury, especially if the exercise is too far away from your current fitness level. I suppose if the initial "exercise" is a gentle and encouraging introduction, without overdoing it, then much more could be accomplished, as it could get one started into a new habit, and encourage hope.

"Positive thinking" exercises would, in my opinion, have to feel realistic and honest in order to be helpful. They may feel somewhat contrived, but I think this is also normal, just as phrases in a new language may initially feel contrived as you practice them.

And, following a sort of language-learning or athletic metaphor again, I think that "positive thinking" exercises cannot simply be repeating trite phrases such as "I am a good person!" Rather, they need to be dialogs in your mind, or with other people -- in which you challenge yourself to generate self-affirming statements, perhaps then listen to your mind rail against them, then generate a new affirming response. It becomes an active conversation in your mind rather than bland repetition of statements you don't find meaningful. This is just like how learning a language requires active conversation.

Self-affirmation may initially be yet another tool which at times helps you get through the hour or the day. But I believe that self-affirming language will gradually become incorporated deeply into your identity, as you practice daily, over a period of years. Actually, I think the "language" itself is not entirely the source of identity change; I think such language acts as a catalyst which resonates with a core of positive identity which already exists within you, and allows it to develop and grow with greater ease. This core of positivity may have been suppressed due to years of depression, environmental adversity, or other stresses.

Monday, September 14, 2009

A list of individuals who developed talents later in life

This is a follow-up to my language-learning metaphor entry.

One comment was about the unlikelihood of mastering a "new language" (literally or metaphorically) if you only start learning beyond childhood or adolescence.

This seems to be a common view.

I always like to look for counterexamples (it's my mathematical way coming out in me):

1) the first one that leapt to my mind is Joseph Conrad, one of the greatest authors in the history of the English language. Conrad did not speak a word of English until he was 21. He began writing in English at age 32. His first published works came out when he was about 37. In order to learn English, he did not attend language classes or read grammar books, but chose to live and work in an English-speaking environment (immersion!).

2) I don't know much about rock musicians, but my research led me to a biography of Tom Scholz, from the group Boston. He started playing musical instruments at 21.

3) Here's a link to someone else's list:
http://creativejourneycafe.com/2008/04/09/10-creative-late-bloomers/

4) Here's another list, which is part of a review of a book called Defying Gravity: A Celebration of Late-Blooming Women:
http://www.bookpleasures.com/Lore2/idx/28/2190/Womens_Issues/article/Defying_Gravity.html

5) Another link with good examples:
http://en.wikipedia.org/wiki/Late_bloomer
(I'm the one who added Joseph Conrad to this list).

...I invite other suggestions to expand my list!

Friday, September 11, 2009

Making it through a difficult day or night

It can be hard to make it through the next hour, if you are feeling desperately unhappy, agitated, empty, worthless, or isolated, especially if you also feel disconnected from love, meaning, community, "belongingness," or relationships with others.

Such desperate places of mind can yet be familiar places, and a certain set of coping tactics may evolve. Sometimes social isolation or sleep can help the time pass; other times there can be addictive or compulsive behaviours of different sorts. These tactics may either be distractions from pain or distress, or may serve to anesthetize the symptoms in some way, to help the time pass.

Time can become an oppressive force to be battled continuously, one minute after the next.

I'd like to work on a set of ideas to help with situations like this. I realize a lot of these ideas may be things that are already very familiar, or that may seem trite or irrelevant. Maybe things that are much easier said than done. But I'd like to just sort of brainstorm here for a moment:

1) One of the most important things, I think, is to be able to hold onto something positive or good (large or small), in your mind, to focus on it, to rehearse it, to nurture its mental image, even if that good thing is not immediately present. The "good thing" could be anything -- a friend or loved one, a song, a place, a memory, a sensation, a dream, a goal, an idea. In the darkest of moments we are swept into the immediacy of suffering, and may lose touch with the internalized anchors which might help us to hold on, or to help us direct our behaviour safely through the next 24 hours.

In order to practice "holding on" I guess one would have to get over the skepticism many would have that such a tactic could actually help.

In order to address that, I would say that "covert imagery" is a well-established technique, with an evidence base in such areas as the treatment of phobias, learning new physical activities, practicing skills, even athletic training (imagining doing reps can actually strengthen muscles). The pianist Glenn Gould used covert imagery to practice the piano, preferring to do much of his practice and rehearsal away from any keyboard, and to learn new pieces entirely away from the piano. There is nothing mystical about the technique -- it is just a different way of exercising your brain, and therefore your body (which is an extension of your brain).

In order for covert imagery to work, it really does help to believe in it though (skepticism is highly demotivating).

Relationships can be "covertly imagined" as well -- and I think this is a great insight from the psychoanalysts. An internalized positive relationship can stay with us, consciously or unconsciously, even when we are physically alone. If you have not had many positive relationships, or your relationships have not been trustworthy, safe, or stable, then you may not have a positive internalized relationship to comfort you when you are in distress. You may feel comforted in the moment, if the situation is right, but when alone, you may be right back to a state of loneliness or torment.

The more trust and closeness that develops in your relationship life, the easier it will be to self-soothe, as you "internalize" these relationships.

Here are some ways to develop these ideas in practical ways:

-journaling, not just about distress, but about any healthy relationship or force in your life which helps soothe you and comfort you

-using healthy "transitional objects" which symbolize things which are soothing or comforting, without those things literally being present. These objects may serve to cue your memory, and help interrupt a cycle of depressive thinking or action.

-if there is a healthy, positive, or soothing relationship with someone in your life, imagine what that person might say to comfort or guide you in the present moment; and "save up" or "put aside" some of your immediate distress to discuss with that person when you next meet.

2) Healthy distraction.
e.g. music (listening or performing); reading (silently or aloud, or being read to); exercise (in healthy moderation); hobbies (e.g. crafts, knitting, art); baking
-consider starting a new hobby (e.g. photography)

3) Planning healthy structured activities
e.g. with community centres, organized hikes, volunteering, deliberately and consciously phoning friends

4) Creating healthy comforts
e.g. hot baths, aromatherapy, getting a massage, preparing or going out for a nice meal

5) Recognizing and blocking addictive behaviours
-there may be a lot of ambivalence about this, as the addictive behaviours may have a powerful or important role in your life; but freeing oneself from an addiction, or from recurrent harmful behaviour patterns, can be one of the most satisfying and liberating of therapeutic life changes.
An addictive process often "convinces" one that its presence is necessary and helpful, and that its absence would cause even worse distress.

6) Humour
-can anyone or anything make you laugh?
-can you make someone laugh?

7) Meditation
-takes a lot of practice, but can be a powerful tool for dealing safely with extreme pain
-could start with a few Kabat-Zinn books & tapes, or consider taking a class or seminar (might need to be patient to find a variety of meditation which suits you)

8) Being with animals (dogs, cats, horses, etc.). If you don't or can't have a pet, then volunteering with animals (e.g. at the SPCA) could be an option.

9) Caring for other living things (e.g. pets, plants, gardens)

10) Arranging for someone else to take care of you for a while (e.g. by friends, family, or in hospital if necessary)

11) Visiting psychiatry blogs
-(in moderation)


...I'm just writing this on the spur of the moment, I'll have to do some editing later, feel free to comment...

Tuesday, September 8, 2009

When your therapist makes a mistake

Sometimes your therapist will make a mistake:
- an insensitive or clumsy comment
- an intrusive line of questioning
- a failure to notice, attend to, or take seriously, something important in the session
- unwelcome or way-off-base advice.

If such problems are recurrent and severe, it may be a sign that you don't have a very good therapist, and that it is important to seek a referral to someone else.

Some problems could be forms of malpractice (e.g. being given dangerous medications inappropriately), and could be pursued through legal channels.

I think that a healthy therapy frame is one in which the therapist will be open to discussing any problems or mistakes.

The therapist should sincerely apologize for all mistakes, and be open to making a plan to prevent similar mistakes from happening again.

You deserve to feel safe, respected and cared for in therapy.

There are other types of conflicts that can arise in therapy, when one person or the other feels hurt, frustrated, or misunderstood. I can think of situations over the past ten years in which there have been tense conflicts, and in which my patient chose not to continue seeing me. In some of these cases, I have felt that there was a conflict--a problem in the relationship--which needed to be resolved. Sometimes these conflicts were made more likely by my own character style or behavioral quirks; other times I think these conflicts were at least partly "transferential," in that my actions triggered memories associated with conflicts from previous relationships (such as with parents growing up). In a few cases, I think the conflict was influenced by active mood symptoms (e.g. severe irritability). I think many conflicts have a mixture of different causes, and are not necessarily caused by just one thing.

In any case, I do strongly believe that resolving conflict in therapy is very important. And I believe a therapist must gently and empathically invite a dialogue about conflicts, in a manner which is open, non-defensive, and "non-pushy." Such a moment of conflict resolution, if it occurs, could be one of the most valuable parts of a therapy experience, a source of peace and freedom.

Monday, August 31, 2009

Language Learning Metaphor


I have often compared psychological change to language learning.

This could be appreciated on a metaphorical level, but I think that neurologically the processes are similar.

Many people approach psychological change as they would approach something like learning Spanish. Reasons for learning Spanish could be very practical (e.g. benefits at work, moving to a Spanish-speaking country, etc.), or could be more whimsical or esthetic (e.g. always enjoying Spanish music or movies). There is a curiosity and desire to learn and change, and steps are taken to begin changing. A Spanish language book would be acquired. An initial vigorous burst of energy would be spent learning some Spanish vocabulary.

This process might last a few weeks or months. There might be a familiarity with certain phrases, an intellectual appreciation of the grammatical structure, and perhaps the ability to ask for something in a coffee shop.

Then the Spanish book would sit on the shelf, and never be opened again.

Another pathway could be like the French classes I remember during elementary school. We must have had some French lessons every week for eight years. I did well academically, and had high grades in French.

But I never learned to speak French.

And most people don't learn to speak Spanish either, despite their acquisition of instructional books.

So, there is a problem here: motivation exists to change or learn something new. There is a reasonable plan for change. Effort is invested into changing. But change doesn't really happen. Or the change only happens in a very superficial way.

Here is what I think is required to really learn a language:

1) Immersion is the optimal process. That is, you have to use only the new language, constantly, for weeks, months, or years at a time. This constrains one's mind to function in the new language. Without such a constraint, the mind shifts back automatically to the old language most of the time, and the process of change is much slower, or doesn't happen at all.
2) Even without immersion, there must be daily participation in the learning task, for long periods of time.
3) The process must include active participation. It is helpful to listen quietly, to read, to understand grammar intellectually -- but the most powerful acts of language learning require you to participate actively in conversation using the new language.
4) Perhaps 1000 hours of active practice are required for fluency; 100 hours of practice will help you get by at a very basic level. Six to ten hours of work per week is a reasonable minimum (see the rough arithmetic just after this list).
5) Along the way, you have to be willing to function at what you believe is an infantile level of communication, and stumble through, making lots of mistakes, possibly being willing to embarrass yourself. It will feel awkward and slow at first.
6) It is probably necessary to have fellow speakers of the new language around you, to converse with during your "immersion" experience.
7) Part of the good news is that once you get started, even with a few hours' practice, there will be others around you to help you along enthusiastically.
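
To put rough numbers on point 4 (simple arithmetic based on the figures above, which are themselves only estimates): at 6 hours per week, 1000 hours takes about 167 weeks--more than three years--while at 10 hours per week it takes about 100 weeks, or roughly two years. Even the 100-hour "getting by" level represents ten to seventeen weeks of steady work.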

I think that psychological change requires a similar approach, and the brain is likely to change in a similar way. I am reminded of Taub's descriptions of constraint-induced movement therapy for stroke rehabilitation: recovery of function, and neuroplastic brain change, can take place much more effectively if the person is in a state of physiologic "immersion."

Many people acquire books about psychological change (e.g. self-help books, CBT manuals, etc.) in the same way one might acquire a book about learning Spanish. People might read them through, learn a few things, then the books would sit unopened for the next five years.

Or many people might participate in psychotherapy in a manner similar to a weekly language lesson: it might be familiar and educational--if there were an exam to write, they might get high grades--but often the "new language" fluency never really develops.

So I encourage the idea of finding ways to create an "immersion" experience, with respect to psychological change. This requires daily work, preferably in an environment where you can set the "old language" aside completely. This work may feel artificial, slow, contrived, or superficial. But this is just like practicing phrases in a new language for the first time. Eventually, the work will feel more natural, spontaneous, and easy.

I think the greatest strength of cognitive-behavioural therapy is its emphasis on "homework," which calls upon people to focus every day on constructive psychological change. And the different columns of a CBT-style homework page remind me of the "columns" one might use to translate phrases from one language into another. In both cases, for this homework to work, it has to be practiced not just on paper, but spoken out loud, or spoken inside your mind, with sincerity and repetition, and preferably also with other people in dialogue.
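
For illustration (this is a generic example of the style of a CBT thought record, not a form from any particular manual), the columns might look like this:

Situation | Automatic thought | Emotion (0-100) | Evidence for & against | Balanced alternative thought | Re-rated emotion

Filling in a row on paper is like doing a written grammar exercise; repeating the balanced thought sincerely, in the midst of the actual situation, or rehearsing it out loud with another person, is the equivalent of conversation practice.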

There's some interesting academic work out there on language acquisition--but for a start, here's a reference from a language-learning website (particularly the summary on the bottom half of this webpage):
http://www.200words-a-day.com/language-learning-reviews.html

Monday, August 17, 2009

ADHD questions

Here are some great questions about ADHD, submitted by a reader:

1) You write here that long-term use of stimulants has NOT been shown to improve long-term academic outcomes. Why do you think this is, given that symptoms of ADHD improve on medication? (It actually really depresses me to think that individual symptoms can improve, yet no real change takes place...though I know that this might not apply to all patients.)

2) What are some effective non-drug treatments for ADHD? I am particularly interested in dietary measures, and also EEG biofeedback.

3) I have read about prescribing psychostimulants as a way of basically diagnosing ADHD...i.e., the diagnosis is based on your response to the medication. I am just wondering how precise this would be, given that stimulants would probably (?) improve most people's concentration, etc. Or is there any role for neuropsychological testing in trying to establish a diagnosis? Is there any way of definitively establishing this kind of diagnosis?

4) I have read that there are many differences between ADD and ADHD, i.e. not just in symptom presentation but in the underlying brain pathology. Is that true? I'm not sure how to phrase it, it seemed like the suggestion was that ADD was more "organic", although maybe that doesn't make sense. Does that have implications for prognosis or treatment strategies?

5) I have read that one red flag that suggests ADD in the context of MDD treatment is a good response to bupropion. If a patient did not have a really good response to bupropion-- or if the response was only partial-- does this usually mean that treatments with psychostimulants like Ritalin, Adderall, etc. will be ineffective (or only partially effective) also?

6) If ADD is not diagnosed/treated until adulthood, is it usually more difficult to treat than if it is diagnosed/treated in early childhood? Is the response to stimulant treatment just as good? I guess I am wondering if there are certain structural changes that occur in the brain that result from untreated ADD--kind of like long-term depression and hippocampal atrophy?

7) Is there a certain type of patient who usually does poorly on psychostimulants, or who experiences severe side effects on psychostimulants?



I don't know the answers to a lot of these, but I am interested in continuing to learn more. Here's the best response I can come up with for now:

1) First of all, the bottom line for whether something is helpful may not be a single specific measure, like academic performance. Perhaps "well-being" in a broad, general sense is a more reasonable goal. Yet things like academic performance are important in life. Perhaps stimulants or other treatments for ADHD are "necessary but not sufficient" to help with ADHD-related academic problems over the longer term. It appears to me from the data that stimulants are actually helpful for academic problems; it's just that the size of the effect is much smaller than most people would hope for.

2) I wrote a post about zinc supplementation before. Adequate iron stores are probably important as well, as is a generally healthy diet. I've encountered some people with ADHD who have reduced tolerance for irritation or frustration, and may be particularly bothered or distracted by hunger; yet they may not be organized enough to prepare meals regularly through the day. So it can help them manage their ADHD to make sure they always have snacks with them, so that they are never in a hungry state. Other than that, I think there are a lot of nutritional claims out there with a poor evidence base. The link between sugar intake and hyperactivity is poorly substantiated--I've written a post about that.

Food additives and dyes could play a role in exacerbating ADHD symptoms. Based on the evidence below, it makes sense to me to limit food dyes and sodium benzoate in the diet, since such changes do not compromise quality of life in any way, and may lead to improved symptoms. Here are a few references:

http://www.ncbi.nlm.nih.gov/pubmed/17825405
(this is the best of the references: it is from The Lancet, 2007)

http://www.ncbi.nlm.nih.gov/pubmed/15613992
http://www.ncbi.nlm.nih.gov/pubmed/15155391

I once attended a presentation on EEG biofeedback. I think it is a promising modality: probably harmless to try, but also probably expensive. It will be interesting once the technology is available to use EEG biofeedback at your own home computer, at low cost.

A few of the self-help books about ADHD are worth reading. There are a lot of practical suggestions about managing symptoms. Some of the books may contain a strongly biased agenda for or against things like stimulants or dietary changes, so you need to be prepared for that possibility.

3) The ADHD label is an artificial, semantic creation, a representation of symptoms or traits which exist on a continuum. Even those who do not officially satisfy symptom-checklist criteria for ADHD could benefit substantially from ADHD treatments if some component of these symptoms is at play neurologically. Many people with apparent disorders of mood, personality, learning, conduct, etc. may have some component of ADHD as well; in some cases ADHD treatments are remarkably helpful for the other problems. So I think careful trials of stimulants could be helpful diagnostically for some people, provided there are no significant contraindications.

4) I've always thought of the ADHD label as just a semantic updating of the previous ADD label. Subtypes of ADHD which are predominantly inattentive rather than hyperactive may differ in terms of comorbidities and prognosis.

5) Hard to say. Many people think of bupropion as a "dopaminergic" drug, whereas bupropion and its relevant metabolites probably act mainly on the norepinephrine system in humans (its dopaminergic activity is more significant in dogs). But perhaps bupropion response could correlate with stimulant response. I haven't seen a good study to show this, nor do I have a case series myself to comment one way or the other based on personal experience.

6) I don't know about that. Comorbidities (e.g. substance use, relationship, or conduct problems) may have accumulated in adults who have not had help during childhood. Yet I have often found it to be the case that the core symptoms of most anything can improve with treatment, at any age.

7) Patients with psychotic disorders (i.e. having a history of hallucinations, delusions, or severely disorganized thinking) often seem to do poorly on stimulants. Patients who are using stimulants primarily to increase energy or motivation often are disappointed with stimulants after a few months, since tolerance develops for effects on energy. Patients with eating disorders could do poorly, since stimulant use may become yet another dysfunctional eating behaviour used to control appetite. And individuals who are trying to use stimulants as part of thrill-seeking behaviour, who are using more than prescribed doses, or who are selling their medication, are worse off for receiving stimulant prescriptions.

Wednesday, July 29, 2009

Twin Studies & Behavioral Genetics

The field of behavioral genetics is of great interest to me.

A lot of very good research has been done in this area for over 50 years.

One of the strongest methods of research in behavioral genetics is the "twin study", in which pairs of identical twins are compared with pairs of non-identical twins, looking at symptoms, traits, behaviors, disease frequencies, etc.

I would like to explore this subject in much greater detail in the future, but my very brief summary of the data is this:
1) most human traits, behaviors, and disease frequencies are strongly affected by hereditary (genetic) factors. Typically, about 50% of the variability in these measures is attributable to variability in inherited genes; that is, the "heritability" is typically 50%, sometimes much higher (see the sketch just after this list for how such figures are estimated).
2) The remaining variability is mostly due to so-called "non-shared environmental factors". This fact is jarring to those of us who have believed that the character of one's family home (a "shared environmental variable") is a major determinant of future character traits, etc.
3) Hereditary factors tend to become more prominent, rather than less prominent, with advancing age. One might have thought that, as one grows older, environmental events would play an ever-increasing role in "sculpting" our personalities or other traits. This is not the case.
4) Some of the "environmental variation" may in fact be random. Basically, good or bad luck. Getting struck by lightning, or winning the lottery, or not. Such "luck-based" events are mostly (though not entirely) outside our control.
5) All of these facts may lead to a kind of fatalism, a resignation about our traits being determined by factors outside our control. (Mind you, being "lucky" or "unlucky" may be determined more by attitudinal factors such as openness than by random events alone: see the following article--http://www.scientificamerican.com/article.cfm?id=as-luck-would-have-it)
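
As an aside, for readers curious where figures like "50% heritability" come from, here is a minimal sketch of the classic Falconer approximation used in twin studies. This is standard textbook material rather than a method from any particular study cited below, and the example numbers are invented purely for illustration:

```python
# A minimal sketch of the classic Falconer approximation for twin studies.
# It decomposes trait variance using the correlation between identical
# (monozygotic) twin pairs and fraternal (dizygotic) twin pairs.

def falconer_estimates(r_mz, r_dz):
    h2 = 2 * (r_mz - r_dz)   # heritability (additive genetic variance)
    c2 = 2 * r_dz - r_mz     # shared (family) environment
    e2 = 1 - r_mz            # non-shared environment, plus measurement error
    return h2, c2, e2

# Invented example: if identical twins correlate at 0.75 on some trait,
# and fraternal twins at 0.50, the estimates are h2 = 0.50, c2 = 0.25,
# e2 = 0.25 -- that is, roughly "50% heritability".
print(falconer_estimates(0.75, 0.50))
```

Modern studies use more sophisticated structural-equation models, but the intuition is the same: the excess similarity of identical twins over fraternal twins is attributed to genes.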


Here is some of my critical response to the above statements:

1) Statements about heritability are in fact dependent upon the average environmental conditions experienced by the population being studied. For example, if we were to measure the heritability of becoming the leader of a large country, we would find heritabilities of nearly 100% in times or places where there are hereditary monarchies, and much lower heritabilities for democracies (mind you, the case of the Bush family shows that the heritability has been non-zero in the U.S.).
2) Non-shared environmental factors are extremely important. This does not mean that the family environment is unimportant: part of an individual's non-shared environmental experience is that person's unique experience of the family environment. The lesson in this is that families need to pay close attention to how each individual family member is adapting to the family situation, and also to each child's peer and school environments.
3) The influence of shared environmental factors is small, but rarely zero. Usually there is some small percentage of variability accounted for by shared factors. Often this percentage is larger in childhood, and declines towards zero during adult maturation. But it is not zero. Just because an influence is small does not mean that it is unimportant. We have limited control over our genetics, after all, but we do have more substantial control over shared and non-shared environmental variables.
4) Most studies look at the general effect of genetic & environmental factors in populations. Compelling examples are frequently cited of individual twins, separated at birth: perhaps one twin is adopted into a wealthy, privileged home with access to multiple educational resources, while the other grows up in a more impoverished setting. The story typically is that the twins both end up with similar funds of knowledge or intelligence: the first twin reads books available at home, while the other twin develops her inherited interest in knowledge by going out of her way to acquire a library card, and spending all day reading at the local library. Such case examples illustrate how inherited factors can prevail despite environmental differences.

But I'm interested to see counterexamples: examples in which differences in environment between twins did lead to substantial differences in traits later on. It is this type of example that has the most practical value, in my opinion.

5) I have considered the following idea:
For any trait or characteristic having any heritability, there may be environmental variables that can change the outcome of the trait for a given individual--even for highly, obviously heritable traits. Consider eye color, for example. This seems obviously purely genetic. But suppose there were a medication that could change eye color. This would be a purely environmental factor (though, of course, perhaps the tendency to use a drug to change eye color would be partially inherited). Most people would not use such a drug, so measures of heritability for eye color would remain very high. But despite this high heritability, there may well be simple, direct environmental changes which, for a given individual, could completely change the trait. Such environmental changes would have to be very different from average environmental conditions: the higher the heritability, the farther the environmental change would have to be from average in order to effect a change in the trait.

We could say that the tendency to kill and devour wildebeest is heritable among the different wild creatures of the African savanna. The genetic differences between lions and giraffes would completely determine the likelihood of such creatures devouring a wildebeest or not: lions inherit a tendency to eat wildebeest, while giraffes do not. Yet I suppose we could train and/or medicate lions (and also keep them well-fed with a vegetarian diet!) so that wildebeest are totally safe around them. In this way, we would be introducing a set of environmental changes which would cause a radical change in lion behavior. This does not change the fact that the heritability of lions' killing wildebeest is extremely high; it just means that the environmental change necessary to change the trait would have to be radically different from the environmental experience of the average lion (most lions are not trained to be non-predatory!).


The clinical applications I have based on these observations are the following:

1) Many psychological phenomena are highly heritable. This does not mean that these phenomena are unchangeable, though. It does mean that, in order to change the trait or behavior, an environmental change needs to occur which is substantially different from the environmental experiences of most people, or of the "average person." This may help us to use our efforts most efficiently. For example, it would be inefficient merely to provide everybody with a typical, average, 2-parent family living in a bungalow: the evidence shows that such "average" environmental changes have minimal impact on psychological or behavioral traits. It would be important to make sure each individual is not deprived or harmed, and has access to those basic environmental elements required to realize their potential. If there are problems, then addressing them may require a substantial, unique, or radical environmental change.
2) The most influential environmental variables are those which are unique to the individual, not the ones which are shared in a family. This does not mean that family experiences are unimportant, but that a child's unique experience of his or her own family environment, is much more important than the overall atmosphere of the home. A chaotic household may be a pleasure, a source of boisterous social stimulation, for one child, but an injurious, disruptive, irritating source of stress for another. A calm household may allow one child to grow and develop, while it may cause another child to become bored or restless.
3) The higher the heritability, the more pronounced the environmental (or therapeutic) change must be, compared to the average environment in the population, in order to change the trait.
4) The motivation for having a certain style of home or parenting should logically not be primarily to "sculpt" the personality of your child, but to allow for joyous long-term memories to be shared and recounted as stories by parent and child, and to attend to the unique nature of each individual child, providing for any healthy needs along the way.


Some references:

Segal, Nancy L. (2000). Entwined Lives: Twins and What They Tell Us About Human Behavior. New York: Plume.

http://www.ncbi.nlm.nih.gov/pubmed/19378334
{a 2009 review including a look at "epigenetics"--changes in gene expression that do not involve changes in DNA sequence--which implies that identical twins are not truly "identical" in a functional genetic sense}

http://www.ncbi.nlm.nih.gov/pubmed/18412098
{genetics of PTSD}

http://www.ncbi.nlm.nih.gov/pubmed/17176502
{a look at how genetic factors influence environmental experience}

http://www.ncbi.nlm.nih.gov/pubmed/17679640
{a look at how choice of peers is influenced by heredity, more so as a child grows up}

http://www.ncbi.nlm.nih.gov/pubmed/18391130
{some of the research showing different genetic influences coming "on line" during different stages of childhood and young adult development}

http://www.ncbi.nlm.nih.gov/pubmed/19634053
{a recent article by TJ Bouchard, one of the world's leading experts in twin studies}