Tuesday, September 8, 2009

When your therapist makes a mistake

Sometimes your therapist will make a mistake:
- an insensitive or clumsy comment
- an intrusive line of questioning
- a failure to notice, attend to, or take seriously something important in the session
- unwelcome or way-off-base advice.

If such problems are recurrent and severe, it may be a sign that you don't have a very good therapist, and that it is important to seek a referral to someone else.

Some problems could be forms of malpractice (e.g. being given dangerous medications inappropriately), and could be pursued through legal channels.

I think that a healthy therapy frame is one in which the therapist will be open to discussing any problems or mistakes.

The therapist should sincerely apologize for all mistakes, and be open to making a plan to prevent similar mistakes from happening again.

You deserve to feel safe, respected and cared for in therapy.

There are other types of conflicts that can arise in therapy, when one person or the other feels hurt, frustrated, or misunderstood. I can think of situations over the past ten years in which there have been tense conflicts, and in which my patient chose not to continue seeing me. In some of these cases, I have felt that there was a conflict--a problem in the relationship--which needed to be resolved. Sometimes these conflicts were made more likely by my own character style or behavioral quirks; other times I think these conflicts were at least partly "transferential," in that my actions triggered memories associated with conflicts from previous relationships (such as with parents growing up). In a few cases, I think the conflict was influenced by active mood symptoms (e.g. severe irritability). I think many conflicts have a mixture of different causes, and are not necessarily caused by just one thing.

In any case, I do strongly believe that resolving conflict in therapy is very important. And I believe a therapist must gently and empathically invite a dialog about conflicts, in a manner which is open, non-defensive, and "non-pushy." Such a moment of conflict-resolution, if it occurs, could be one of the most valuable parts of a therapy experience, a source of peace and freedom.

Monday, August 31, 2009

Language Learning Metaphor


I have often compared psychological change to language learning.

This could be appreciated on a metaphorical level, but I think that neurologically the processes are similar.

Many people approach psychological change as they would approach something like learning Spanish. Reasons for learning Spanish could be very practical (e.g. benefits at work, moving to a Spanish-speaking country, etc.), or could be more whimsical or esthetic (e.g. always enjoying Spanish music or movies). There is a curiosity and desire to learn and change, and steps are taken to begin changing. A Spanish language book would be acquired. An initial vigorous burst of energy would be spent learning some Spanish vocabulary.

This process often lasts only a few weeks or months. There might be a familiarity with certain phrases, an intellectual appreciation of the grammatical structure, and perhaps the ability to ask for something in a coffee shop.

Then the Spanish book would sit on the shelf, and never be opened again.

Another pathway could be like the French classes I remember during elementary school. We must have had some French lessons every week for eight years. I did well academically, and had high grades in French.

But I never learned to speak French.

And most people don't learn to speak Spanish either, despite their acquisition of instructional books.

So, there is a problem here: motivation exists to change or learn something new. There is a reasonable plan for change. Effort is invested into changing. But change doesn't really happen. Or the change only happens in a very superficial way.

Here is what I think is required to really learn a language:

1) Immersion is the optimal process. That is, you have to use only the new language, constantly, for weeks, months, or years at a time. This constrains one's mind to function in the new language. Without such a constraint, the mind shifts back automatically to the old language most of the time, and the process of change is much slower, or doesn't happen at all.
2) Even without immersion, there must be daily participation in the learning task, for long periods of time.
3) The process must include active participation. It is helpful to listen quietly, to read, to understand grammar intellectually -- but the most powerful acts of language learning require you to participate actively in conversation using the new language.
4) Perhaps 1000 hours of active practice are required for fluency. 100 hours of practice will help you to get by on a very basic level. 6-10 hours of work per week is a reasonable minimum. (See the quick arithmetic after this list.)
5) Along the way, you have to be willing to function at what you believe is an infantile level of communication, and stumble through, making lots of mistakes, possibly being willing to embarrass yourself. It will feel awkward and slow at first.
6) It is probably necessary to have fellow speakers of the new language around you, to converse with during your "immersion" experience.
7) Part of the good news is that once you get started, even with a few hours' practice, there will be others around you to help you along enthusiastically.
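
To put the arithmetic in point 4 into perspective, here is a quick back-of-the-envelope calculation (a sketch of my own; the hour figures are just the rough estimates from the list above, not precise requirements):

```python
# Rough arithmetic only -- the 1000-hour target and the 6-10 hours/week pace
# are the ballpark estimates from point 4, not established facts.
hours_needed = 1000
for hours_per_week in (6, 10):
    weeks = hours_needed / hours_per_week
    print(f"{hours_per_week} h/week -> about {weeks:.0f} weeks (~{weeks / 52:.1f} years)")
# 6 h/week  -> about 167 weeks (~3.2 years)
# 10 h/week -> about 100 weeks (~1.9 years)
```

In other words, even at a "reasonable minimum" pace, fluency is a project of a couple of years or more -- which is exactly why the weekly-lesson or book-on-the-shelf approach rarely gets there.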

I think that psychological change requires a similar approach. The brain is likely to change in a similar way. I am reminded of Taub's descriptions of constraint-induced movement therapy for stroke rehabilitation: recovery of function, and neuroplastic brain change, can take place much more effectively if the person is in a state of physiologic "immersion."

Many people acquire books about psychological change (e.g. self-help books, CBT manuals, etc.) in the same way one might acquire a book about learning Spanish. People might read them through, learn a few things, then the books would sit unopened for the next five years.

Or many people might participate in psychotherapy much as they would in a weekly language lesson: it might be familiar, educational--if there were an exam to write, people might get high grades--but often the "new language" fluency never really develops.

So I encourage the idea of finding ways to create an "immersion" experience, with respect to psychological change. This requires daily work, preferably in an environment where you can set the "old language" aside completely. This work may feel artificial, slow, contrived, or superficial. But this is just like practicing phrases in a new language for the first time. Eventually, the work will feel more natural, spontaneous, and easy.

I think the greatest strength of cognitive-behavioural therapy is its emphasis on "homework," which calls upon people to focus every day on constructive psychological change. And the different columns of a CBT-style homework page remind me of the "columns" one might use to translate phrases from one language into another. In both cases, in order for this homework to work, it has to be practiced, not just on paper, but spoken out loud, or spoken inside your mind, with sincerity and repetition, and preferably also with other people in dialogs.

There's some interesting academic work out there on language acquisition--but for a start, here's a reference from a language-learning website (particularly the summary on the bottom half of this webpage):
http://www.200words-a-day.com/language-learning-reviews.html

Monday, August 17, 2009

ADHD questions

Here are some great questions about ADHD, submitted by a reader:

1) You write here that long-term use of stimulants has NOT been shown to improve long-term academic outcomes. Why do you think this is, given that symptoms of ADHD improve on medication? (It actually really depresses me to think that individual symptoms can improve, yet no real change takes place...though I know that this might not apply to all patients.)

2) What are some effective non-drug treatments for ADHD? I am particularly interested in dietary measures, and also EEG biofeedback.

3) I have read about prescribing psychostimulants as a way of basically diagnosing ADHD...i.e., the diagnosis is based on your response to the medication. I am just wondering how precise this would be, given that stimulants would probably (?) improve most people's concentration, etc. Or is there any role for neuropsychological testing in trying to establish a diagnosis? Is there any way of definitively establishing this kind of diagnosis?

4) I have read that there are many differences between ADD and ADHD, i.e. not just in symptom presentation but in the underlying brain pathology. Is that true? I'm not sure how to phrase it, it seemed like the suggestion was that ADD was more "organic", although maybe that doesn't make sense. Does that have implications for prognosis or treatment strategies?

5) I have read that one red flag that suggests ADD in the context of MDD treatment is a good response to bupropion. If a patient did not have a really good response to bupropion -- or if the response was only partial -- does this usually mean that treatments with psychostimulants like Ritalin, Adderall, etc. will be ineffective (or only partially effective) also?

6) If ADD is not diagnosed/treated until adulthood, is it usually more difficult to treat than if it is diagnosed/ treated in early childhood? Is the response to stimulant treatment just as good? I guess I am wondering if there are certain structural changes that occur in the brain that result from untreated ADD-- kind of like long-term depression and hippocampal atrophy?

7) Is there a certain type of patient who usually does poorly on psychostimulants, or who experiences severe side effects on psychostimulants?



I don't know the answers to a lot of these, but I am interested in continuing to learn more. Here are the best responses I can come up with for now:

1) First of all, the bottom line of whether something is helpful or not may not be some specific thing, like academic performance. Perhaps "well-being" in a broad, general sense is a more reasonable goal. Yet, things like academic performance are important in life. Perhaps stimulants or other treatments for ADHD are "necessary but not sufficient" to help with ADHD-related academic problems over the longer term. It appears to me from the data that stimulants actually are helpful for academic problems; it's just that the size of the effect is much smaller than most people would hope for.

2) I wrote a post about zinc supplementation before. Also adequate iron stores are probably important. A generally healthy diet is probably important. I've encountered some people with ADHD who have reduced tolerance for irritation or frustration, and may be particularly bothered or distracted by hunger; yet they may not be organized to have meals prepared regularly through the day. So it can help them manage their ADHD to make sure they always have snacks with them, so that they are never in a hungry state. Other than that, I think there are a lot of nutritional claims out there which have a poor evidence base. The link between sugar intake and hyperactivity is poorly substantiated--I've written a post about that.

Food additives or dyes could play a role in exacerbating ADHD symptoms. Based on the evidence below, it makes sense to me to limit food dyes and sodium benzoate in the diet, since such changes do not compromise quality of life in any way, and may lead to improved symptoms. Here are a few references:

http://www.ncbi.nlm.nih.gov/pubmed/17825405
(this is the best of the references: it is from Lancet in 2007)

http://www.ncbi.nlm.nih.gov/pubmed/15613992
http://www.ncbi.nlm.nih.gov/pubmed/15155391

I once attended a presentation on EEG biofeedback. I think it is a promising modality. Harmless to give it a try, but probably expensive. It will be interesting once the technology is available to use EEG biofeedback in front of your own home computer, at low cost.

A few of the self-help books about ADHD are worth reading. There are a lot of practical suggestions about managing symptoms. Some of the books may contain a strongly biased agenda for or against things like stimulants or dietary changes, so you need to be prepared for that possibility.

3) The ADHD label is an artificial, semantic creation: a representation of symptoms or traits which exist on a continuum. Even people who do not officially satisfy symptom checklist criteria for ADHD could benefit substantially from ADHD treatments if some component of these symptoms is at play neurologically. Many people with apparent disorders of mood, personality, learning, conduct, etc. may have some component of ADHD as well: in some cases ADHD treatments are remarkably helpful for the other problems. So I think careful trials of stimulants could be helpful diagnostically for some people, provided there are no significant contraindications.

4) I've always thought about the ADHD label as just a semantic updating of the previous ADD label. Subtypes of ADHD which are predominantly inattentive rather than hyperactive may differ in terms of comorbidities and prognosis.

5) Hard to say. Many people think of bupropion as a "dopaminergic" drug, whereas bupropion and its relevant metabolites probably act mainly on the norepinephrine system in humans (its dopaminergic activity is more significant in dogs). But perhaps bupropion response could correlate with stimulant response. I haven't seen a good study to show this, nor do I have a case series myself to comment one way or the other based on personal experience.

6) I don't know about that. Comorbidities (e.g. substance use, relationship, or conduct problems) may have accumulated in adults who have not had help during childhood. Yet I have often found it to be the case that the core symptoms of most anything can improve with treatment, at any age.

7) Patients with psychotic disorders (i.e. having a history of hallucinations, delusions, or severely disorganized thinking) often seem to do poorly on stimulants. Patients who are using stimulants primarily to increase energy or motivation often are disappointed with stimulants after a few months, since tolerance develops for effects on energy. Patients with eating disorders could do poorly, since stimulant use may become yet another dysfunctional eating behaviour used to control appetite. And individuals who are trying to use stimulants as part of thrill-seeking behaviour, who are using more than prescribed doses, or who are selling their medication, are worse off for receiving stimulant prescriptions.

Wednesday, July 29, 2009

Twin Studies & Behavioral Genetics

The field of behavioral genetics is of great interest to me.

A lot of very good research has been done in this area for over 50 years.

One of the strongest methods of research in behavioral genetics is the "twin study", in which pairs of identical twins are compared with pairs of non-identical twins, looking at symptoms, traits, behaviors, disease frequencies, etc.

I would like to explore this subject in much greater detail in the future, but my very brief summary of the data is this:
1) Most human traits, behaviors, and disease frequencies are strongly affected by hereditary (genetic) factors. Typically, about 50% of the variability in these measures is caused by variability of inherited genes. That is, the "heritability" is typically 50%, sometimes much higher. (The sketch after this list shows roughly how such figures are estimated from twin data.)
2) The remaining variability is mostly due to so-called "non-shared environmental factors". This fact is jarring to those of us who have believed that the character of one's family home (a "shared environmental variable") is a major determinant of future character traits, etc.
3) Hereditary factors tend to become more prominent, rather than less prominent, with advancing age. One might have thought that, as one grows older, environmental events would play an ever-increasing role in "sculpting" our personalities or other traits. This is not the case.
4) Some of the "environmental variation" may in fact be random. Basically, good or bad luck. Getting struck by lightning, or winning the lottery, or not. Such "luck-based" events are mostly (though not entirely) outside our control.
5) All of these facts may lead to a kind of fatalism, a resignation about our traits being determined by factors outside our control. (Mind you, being "lucky" or "unlucky" may be determined more by attitudinal factors such as openness than just by random events: see the following article--http://www.scientificamerican.com/article.cfm?id=as-luck-would-have-it)
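
As a rough illustration of where figures like "50% heritability" come from, here is a minimal sketch of the classical Falconer-style decomposition used in twin studies (my own toy example; the twin correlations are hypothetical, chosen to resemble the pattern often reported for personality traits):

```python
def falconer(r_mz, r_dz):
    """Rough variance decomposition from identical (MZ) and fraternal (DZ) twin correlations."""
    h2 = 2 * (r_mz - r_dz)   # heritability: share of variability attributed to genes
    c2 = 2 * r_dz - r_mz     # shared (family) environment
    e2 = 1 - r_mz            # non-shared environment (plus measurement error)
    return h2, c2, e2

# Hypothetical twin-pair correlations:
print(falconer(r_mz=0.50, r_dz=0.25))
# -> (0.5, 0.0, 0.5): roughly 50% heritable, shared environment near zero,
#    and the remainder attributed to "non-shared environment"
```

This is of course a simplification of how modern twin models are actually fitted, but it shows why points 1 and 2 above tend to go together: the same correlations that imply substantial heritability also imply a small "shared environment" term.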


Here is some of my critical response to the above statements:

1) Statements about heritability are in fact dependent upon the average environmental conditions experienced by the population being studied. For example, if we were to measure the heritability of becoming the leader of a large country, we would find heritabilities of nearly 100% in times or places where there are hereditary monarchies, and much lower heritabilities for democracies (mind you, the case of the Bush family shows that the heritability has been non-zero in the U.S.).
2) Non-shared environmental factors are extremely important. This does not mean that the family environment is unimportant. Part of an individual's non-shared environmental experience is that person's unique experience of the family environment. The lesson in this is that families need to pay close attention to how each individual family member is adapting to the family situation, and also to each child's peer and school environment.
3) The influence of shared environmental factors is small, but rarely zero. Usually there is some small percentage of variability accounted for by shared factors. Often this percentage is larger in childhood, and declines towards zero during adult maturation. But it is not zero. Just because an influence is small does not mean that it is unimportant. We have limited control over our genetics, after all, but we do have more substantial control over shared and non-shared environmental variables.
4) Most studies look at the general effect of genetic & environmental factors in populations. Compelling examples are frequently cited of individual twins, separated at birth: perhaps one twin is adopted into a wealthy, privileged home with access to multiple educational resources, while the other grows up in a more impoverished setting. The story typically is that the twins both end up with similar funds of knowledge or intelligence: the first twin reads books available at home, while the other twin develops her inherited interest in knowledge by going out of her way to acquire a library card, and spending all day reading at the local library. Such case examples illustrate how inherited factors can prevail despite environmental differences.

But I'm interested to see counterexamples: examples in which differences in environment between twins did lead to substantial differences in traits later on. It is this type of example that has the most practical value, in my opinion.

5) I have considered the following idea:
For any trait or characteristic having any heritability, there may be environmental variables that can change the outcome of the trait for a given individual. Even for highly, obviously heritable traits. Consider eye color, for example. This seems obviously purely genetic. But suppose there was a medication that could change eye color. This would be a purely environmental factor (though, of course, perhaps the tendency to use a drug to change eye color would be partially inherited). Most people would not use such a drug. Measures of heritability for eye color would remain very high. But, despite this high heritability, there may well be simple, direct environmental changes which, for a given individual, could completely change the trait. Such environmental changes would have to be very different from average environmental conditions. The higher the heritability, the farther the environmental change would have to be from average in order to effect a change in the trait.

We could say that the tendency to kill and devour wildebeest is heritable, among the different wild creatures of the African savanna. The genetic differences between lions and giraffes would completely determine the likelihood of such creatures devouring a wildebeest or not. We could say that lions inherit a tendency to eat wildebeest, while giraffes do not. Yet, I suppose that it is true that we could train and/or medicate lions (and also keep them well-fed with a vegetarian diet!) so that wildebeest are totally safe around them. In this way, we would be introducing a set of environmental changes which would cause a radical change in lion behavior. This does not change the fact that the heritability for lions' killing wildebeest is extremely high; it just means that the environmental change necessary to change the trait would have to be something radically different from the environmental experience of the average lion (most lions are not trained to be non-predatory!). The toy simulation below makes the same point with numbers.
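
To put some numbers behind the eye-color and lion examples, here is a toy simulation of my own (purely illustrative assumptions: a simple additive trait, small everyday environmental variation, and one radical intervention applied to a single individual):

```python
import random

random.seed(0)
N = 10_000
genes = [random.gauss(0, 1.0) for _ in range(N)]  # genetic contribution to the trait
env = [random.gauss(0, 0.1) for _ in range(N)]    # ordinary, everyday environmental variation
trait = [g + e for g, e in zip(genes, env)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Share of trait variability explained by genes: close to 1.0,
# i.e. the trait looks "almost purely genetic" in this population.
print(round(variance(genes) / variance(trait), 2))

# Now apply a radical intervention, far outside the average environment,
# to one individual (the hypothetical eye-color drug, or the retrained lion).
trait[0] -= 5.0

# That individual's trait has changed completely, yet the population-level
# heritability estimate barely moves, because almost nobody receives the intervention.
print(round(variance(genes) / variance(trait), 2))
```

The point is simply that a very high heritability describes the population as it currently lives, not the limits of what an unusual environmental change could do for a given person.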


The clinical applications I draw from these observations are the following:

1) Many psychological phenomena are highly heritable. This does not mean that these phenomena are unchangeable though. It does mean that, in order to change the trait or behavior, an environmental change needs to occur which is substantially different from the environmental experiences of most people, or of the "average person". This may help us to use our efforts most efficiently. So, for example, it would be inefficient to merely provide everybody with a typical, average, 2-parent family living in a bungalow. The evidence shows that such "average" environmental changes have minimal impact on psychological or behavioral traits. It would be important to make sure each individual is not deprived or harmed, and has access to those basic environmental elements that are required for them to realize their potential. If there are problems, then the means of addressing those problems may require a substantial, unique, or radical environmental change.
2) The most influential environmental variables are those which are unique to the individual, not the ones which are shared in a family. This does not mean that family experiences are unimportant, but that a child's unique experience of his or her own family environment, is much more important than the overall atmosphere of the home. A chaotic household may be a pleasure, a source of boisterous social stimulation, for one child, but an injurious, disruptive, irritating source of stress for another. A calm household may allow one child to grow and develop, while it may cause another child to become bored or restless.
3) The higher the heritability, the more pronounced the environmental (or therapeutic) change must be, compared to the average environment in the population, in order to change the trait.
4) The motivation to have a certain style of home, or parenting, etc. should logically not primarily be to "sculpt" the personality of your child, but to allow for joyous long-term memories, to be shared and recounted as stories by parent and child, and to pay attention to the unique nature of each individual child, providing for any healthy needs along the way.


Some references:

Segal, Nancy L. (2000). Entwined Lives: Twins and What They Tell Us About Human Behavior. New York: Plume.

http://www.ncbi.nlm.nih.gov/pubmed/19378334
{a 2009 review including a look at "epigenetics": the notion that gene expression can change over time, so that identical twins are not truly "identical" in terms of how their genes actually function}

http://www.ncbi.nlm.nih.gov/pubmed/18412098
{genetics of PTSD}

http://www.ncbi.nlm.nih.gov/pubmed/17176502
{a look at how genetic factors influence environmental experience}

http://www.ncbi.nlm.nih.gov/pubmed/17679640
{a look at how choice of peers is influenced by heredity, more so as a child grows up}

http://www.ncbi.nlm.nih.gov/pubmed/18391130
{some of the research showing different genetic influences coming "on line" during different stages of childhood and young adult development}

http://www.ncbi.nlm.nih.gov/pubmed/19634053
{a recent article by TJ Bouchard, one of the world's leading experts in twin studies}

Low-dose atypical antipsychotics for treating non-psychotic anxiety or mood symptoms

Atypical antipsychotics are frequently prescribed to treat symptoms of anxiety and depression. They can be used in the treatment of generalized anxiety, panic disorder, OCD, major depressive disorder, PTSD, bipolar disorder, personality disorders, etc. At this point, such use could be considered "off-label", since the primary use of antipsychotics is treating schizophrenia or major mood disorders with psychotic features.

But there is an expanding evidence base showing that atypicals can be useful in "off-label" situations. Here is a brief review of some of the studies:

http://www.ncbi.nlm.nih.gov/pubmed/19470174
{this is a good recent study comparing low-dose risperidone -- about 0.5 mg -- with paroxetine, for treating panic disorder over 8 weeks. The risperidone group did well, with equal or better symptom relief, also possibly faster onset. But 8 weeks is very brief -- it would be important to look at results over a year or more, and to assess the possibility of withdrawal or rebound symptoms if the medication is stopped. Also it would be important to determine whether the medication is synergistic with psychological therapies, or whether it could undermine psychological therapy (there is some evidence that benzodiazepines may undermine the effectiveness of psychological therapies)}

http://www.ncbi.nlm.nih.gov/pubmed/16649823
{an open study from 2006 showing significant improvements in anxiety when low doses of risperidone, of about 1 mg, were added to an antidepressant, over an 8 week trial}

http://www.ncbi.nlm.nih.gov/pubmed/18455360
{this 2008 study shows significant improvement in generalized anxiety with 12 weeks of adjunctive quetiapine. It was not "low-dose" though -- the average dose was almost 400 mg per day. There is potential bias in this study due to conflict-of-interest, also there was no adjunctive placebo group}

http://www.ncbi.nlm.nih.gov/pubmed/16889446
{in this 2006 study, patients with a borderline personality diagnosis were given quetiapine 200-400 mg daily, for a 12 week trial. As I look at the results in the article itself, I see that the most substantial improvement was in anxiety symptoms, without much change in other symptom areas. The authors state that patients with prominent impulsive or aggressive symptoms responded best}

http://www.ncbi.nlm.nih.gov/pubmed/17110817
{in this large 2006 study (the BOLDER II study), quetiapine alone was used to treat bipolar depression. Doses were 300 mg/d, 600 mg/d, or placebo. There was significant, clinically relevant improvement in the quetiapine groups, with the 300 mg group doing best. Improvements were in anxiety symptoms, depressive symptoms, suicidal ideation, sleep, and overall quality of life.}

Here's a reference to a lengthy and detailed report from the FDA about quetiapine safety when used to treat depression or anxiety:
http://www.fda.gov/ohrms/dockets/ac/09/briefing/2009-4424b2-01-FDA.pdf


In summary, I support the use of atypical antipsychotics as adjuncts for treating various symptoms including anxiety, irritability, etc. But as with any treatment (or non-treatment), there needs to be a close review of benefits vs. risks. The risks of using antipsychotics for treating anxiety are probably underestimated, because the existing studies are of such short duration. Nor are the benefits of long-term use clearly established.

For risk data, it would be relevant to look at groups who have taken antipsychotics for long periods of time. In such groups, antipsychotic use is associated with reduced mortality rates (see the following 2009 reference from Lancet: http://www.ncbi.nlm.nih.gov/pubmed/19595447, which looks at a cohort of over 60 000 schizophrenic patients, showing reduced mortality rates in those who took antipsychotics long-term, compared to those taking shorter courses of antipsychotics, or none at all--the mortality rate was most dramatically reduced in those taking clozapine. Overall, the life expectancy of schizophrenic patients was shown to have increased over a 10-year period, alongside substantial increases in atypical antipsychotic use).

It is certainly clear to me that all other treatments for anxiety (especially behavioural therapies, lifestyle changes, and other forms of psychotherapy) should be optimized, in an individualized way, before medication adjuncts are used.

But I recognize that suffering from anxiety or other psychiatric symptoms can be severely debilitating, and can delay or obstruct progress in relationships, work, school, quality of life, etc. The risks of non-treatment should be taken very seriously. My view of the existing evidence is that adjunctive low-dose antipsychotics can have significant benefits, which can outweigh risks for many patients with non-psychotic disorders. As with any medical treatment decision, it is important for you and your physician to regularly monitor and discuss the risks vs. benefits of ongoing medication therapies, and to remain open to discussing new evidence as it emerges.