Monday, February 13, 2012

Statistics in Psychiatry & Medicine

This is a continuation of my thoughts about this subject. 

Statistical analysis is extremely important for understanding cause & effect!  A major factor here is the way the human mind interprets data.  Daniel Kahneman, the Nobel laureate psychologist, is a great expert on this subject, and I strongly recommend his fantastic book Thinking, Fast and Slow.  I'd like to review it in more detail later, but as a start I will say that it clearly shows how the mind is loaded with powerful biases that lead us to form rapid but erroneous impressions about cause & effect, largely because a statistical treatment of information is beyond the capacity of the rapid, reflexive intuition that dominates our moment-to-moment cognition.  And, of course, a lack of education about statistics and probability eliminates the possibility that the more rational part of our minds can overrule the reflexive, intuitive side.  Much of Kahneman's work concerns how the mind intrinsically attempts to make sense of statistical information -- often reaching incorrect conclusions.  The implication is that we must coolly calculate probabilities in order to interpret a body of data, and resist the urge to rely on "intuition," especially in a research study.

I do believe that a formal statistical treatment of data is much more common now in published research.  But I am now going to argue for something that seems entirely contradictory to what I've just said above!  I'll proceed by way of a fictitious example:

Suppose 1000 people are sampled (with the sample size carefully chosen by a power calculation, so that a meaningful effect, if truly present, would be detected with only a small probability of the result being due to chance), all of whom have a DSM diagnosis of major depressive disorder and HAM-D scores between 25 and 30.  Suppose they are divided into two groups of 500, matched for gender, demographics, severity, chronicity, etc.  Then suppose one group is given an active treatment such as psychotherapy or a medication, while the other group is given a placebo.  This could continue for 3 months, after which the groups could be crossed over, so that every person in the study would at some point receive the active treatment and at another point the placebo.
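(For the statistically inclined: here is a minimal sketch, in Python, of the kind of power calculation alluded to above.  The effect size, alpha, and power values are illustrative assumptions of mine, not figures from any actual trial.)

```python
# A minimal sketch of a sample-size (power) calculation for a two-group
# comparison, using the statsmodels library.  All parameter values below
# are illustrative assumptions, not taken from any real study.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.2,          # assumed small standardized difference (Cohen's d)
    alpha=0.05,               # acceptable probability of a false-positive result
    power=0.8,                # desired probability of detecting a true effect
    alternative='two-sided'
)
print(f"Required sample size per group: {n_per_group:.0f}")
# With these assumptions the answer comes out on the order of 400 per group --
# roughly the scale of the 500-per-group example in the text.
```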

This is a typical design for treatment studies, and I think it is very strong. If the result of the study is positive, this is very clear evidence that the active treatment is useful. 

But suppose the result of the study is negative.  What could this mean?  Most of us would conclude that the active treatment is therefore not useful.   --But I believe this is an incorrect conclusion!-- 

Now suppose instead that this is a study of people complaining of severe headaches, carefully matched for severity, chronicity, etc., and that the treatment offered was neurosurgery or placebo.  I think the results -- carefully summarized by a statistical statement -- would show that neurosurgery does not exceed placebo for the treatment of headache (in fact, I'll bet the neurosurgery group would do a lot worse!).

Yet -- in this group of 1000 people, it is possible that 1 or 2 of the headache sufferers were having headaches due to a surgically curable brain tumor, or a hematoma.  These 1 or 2 patients would have a high chance of being cured by a surgical procedure, while therapies effective for most other headache sufferers (e.g. a triptan for migraine, an analgesic, relaxation exercises, etc.) would have either no effect or a spurious benefit (relaxation might make the headache pain from a tumor temporarily better -- and ironically would delay a definitive cure!).

Likewise, in a psychiatric treatment study, subtypes may exist (perhaps based on genotype or some other factor currently not well understood) which respond very well to specific therapies, even though the majority of people sharing similar symptoms do not respond well to these same therapies.  For example, some individual depressed patients may have a unique characteristic (biological or psychological) which makes them respond to a treatment that would have no useful effect for the majority.

With the most common statistical analyses done and presented in psychiatric and other medical research studies, there would usually be no way to detect this phenomenon:  negative studies would influence practitioners to abandon the treatment strategy for the whole group.  

How can this be remedied?  I think the simplest remedy would be almost trivial to implement: all research studies should include in the publication every single piece of data gathered!  If there is a cohort of 1000 people, there should be a chart or graph showing the symptom changes over time of every single individual.  It would be a messy graph with 1000 lines on it (which, of course, is one reason this is not done!), but there would be much less risk that an interesting outlier would be missed.  If most of the thousand individuals had no change in symptoms, there would be a huge mass of flat lines across the middle of the chart.  But if a few individuals had a total, remarkable cure of symptoms, they would stand out prominently on such a chart.  Ironically, in order to detect such phenomena, we would have to temporarily set aside the statistical tools we had intended to use and "eyeball" the data.  So intuition could still have a very important role to play in statistics & research!
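(Again for the statistically inclined: here is a minimal sketch, in Python, of what such an "everyone on one chart" graph could look like.  The data are simulated purely for illustration -- most trajectories are flat, while a few hypothetical outliers show a dramatic improvement and stand out visually.)

```python
# A minimal sketch of the "plot every individual" idea: one line per
# participant's symptom score over time.  The data are simulated for
# illustration only; a handful of invented "responders" show a large drop.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
weeks = np.arange(13)          # e.g. weekly ratings over a 3-month study
n = 1000

# Most participants: little change, just noise around a high baseline score.
scores = 27 + rng.normal(0, 1.5, size=(n, weeks.size))

# A few hypothetical outliers: near-complete remission over the study period.
responders = rng.choice(n, size=3, replace=False)
scores[responders] = 27 - 2.0 * weeks + rng.normal(0, 1.0, size=(3, weeks.size))

plt.plot(weeks, scores.T, color="grey", alpha=0.05)        # the "mass of flat lines"
plt.plot(weeks, scores[responders].T, color="red", lw=2)   # the outliers stand out
plt.xlabel("Week")
plt.ylabel("HAM-D score")
plt.title("Individual symptom trajectories (simulated)")
plt.show()
```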

After "eyeballing" the complete setof data from every individual, I do agree that this would have to lead to another formal hypothesis, which would subsequently have to be tested using a different study design, designed specifically to pick up such outliers, then a formal statistical calculation procedure would have to be used to evaluate whether the treatment would be effective for this group.  (e.g. the tiny group of headache sufferers specifically with a mass evident on a CT brain scan could enter a neurosurgery treatment study, to clearly show whether the surgery is better than placebo for this group).

I suspect that in many psychiatric conditions there are subtypes not currently known or well-characterized by DSM categorization.  Genome studies should be an interesting area in future decades for further subcategorizing patients who share identical symptoms but might respond very differently to specific treatment strategies.

In the meantime, though, I think it is important to recognize that a negative study, even one done with very good design and statistical analysis, does not prove that the treatment in question is ineffective for EVERYONE with a particular symptom cluster.  There may be individuals who would respond well to such a treatment.  We would be much better placed to recognize this possibility if the COMPLETE set of results for each individual patient were published with every research study.

Another complaint I have about the statistics & research culture has to do with the term "significant."  I believe that "significance" is a construct that undermines the whole point of doing a careful statistical analysis, because it requires some particular probability range to be pronounced "significant" and others "insignificant."  Often, a p-value less than 0.05 is considered "significant."  The trouble with this is that the p-value speaks for itself; it does not require a human interpretive construct or threshold to call something "significant" or not.  I believe that studies should simply report the p-value, without labelling the results "significant" or not.  This way, two studies yielding p-values of 0.04 and 0.07 could be seen to show much more similar results than if the first were called "significant" and the second "insignificant."  There may be some instances in which a p-value less than 0.25 could still usefully guide a long-shot trial of therapy -- it would be very useful to know this p-value exactly, rather than simply reading that the result was "very insignificant."  Similarly, other types of treatments might demand a p-value less than 0.0001 in order to safely guide a decision.  A research culture in which p<0.05 = "significant" dilutes the power and meaning of the analysis, in my opinion, and arbitrarily introduces a type of cultural judgment that is out of place for careful scientists.
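(A small illustrative sketch of this point, in Python: two hypothetical studies with the same underlying effect can land on opposite sides of the 0.05 threshold simply because of sample size, and reporting the exact p-values makes their similarity obvious.  The numbers below are invented for illustration only.)

```python
# A minimal sketch (mine, not from any actual study) of why exact p-values
# are more informative than a "significant"/"insignificant" label.
# Two hypothetical studies with the SAME underlying effect (a mean
# difference of 0.2 standard deviations) but different sample sizes:
from scipy import stats

# ttest_ind_from_stats(mean1, std1, nobs1, mean2, std2, nobs2)
t1, p1 = stats.ttest_ind_from_stats(0.2, 1.0, 200, 0.0, 1.0, 200)
t2, p2 = stats.ttest_ind_from_stats(0.2, 1.0, 150, 0.0, 1.0, 150)

print(f"Study 1 (n=200 per group): p = {p1:.3f}")
print(f"Study 2 (n=150 per group): p = {p2:.3f}")
# The two p-values land on opposite sides of 0.05 even though the underlying
# effect is identical.  Calling one result "significant" and the other
# "insignificant" exaggerates the difference between the studies; reporting
# the exact values makes their similarity obvious.
```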

Tuesday, February 7, 2012

How long does it take for psychotherapy to work?

Various past research articles describe rates of change in psychotherapy patients, with some studies, for example, describing a plateau after about 25 sessions or so.  I find these studies very weak because of the multitude of confounding factors: severity and chronicity are obvious variables, as is the type of follow-up assessment done.

In the CBT literature, a typical trial of therapy is perhaps 16-20 sessions.

In light of our evolving knowledge of neuroplasticity, and our breadth of understanding about education & learning, it seems to me that the most important variable of all is the amount of focused, deliberate practice time spent in a therapeutic activity.  Oddly, most psychotherapy studies--even CBT studies--do not look at how many hours of practice patients have done in-between therapy appointments.  This would be like looking at the progress of music students based on how many lessons they get, without taking into account how much they practice during the week. 

I have often compared psychological symptom change to the changes which occur, for example, with language learning or with learning a musical instrument.

So, I believe that a reasonable estimate of the amount of time required in psychotherapy depends on what one is trying to accomplish:

-Some types of therapeutic problems might be resolved with a few hours of work, or with a single feedback session with a therapist.  This would be akin to a musician with some kind of technical problem who needs just some clear instruction about a few techniques or exercises to practice.  Or it might be akin to a person who is already fluent in a foreign language, but needs a few tips from a local speaker about idioms, or perhaps some help with editing or grammar in a written text.

-Many more therapeutic problems could improve with perhaps 100 hours of work.  This would be like learning to swim or skate competently if you have never done these activities before.  Regular lessons ("therapy") would most likely speed up your rate of progress substantially.  But most of those 100 hours would have to be practice on your own -- unless you are content for the progress to take place over a year or more.  With the language analogy, think of how fluent you might become in a foreign language after 100 hours of focused, deliberate practice.  For most of us, this would lead to an ability to have a very simple conversational exchange, perhaps to get around in the most basic way in another country.

-A much larger change is possible with 1000 hours of work: with music, one could become quite fluent but probably not an expert.  With a foreign language, comfortable fluency would likely be possible, though still with an accent and a preference for one's original language.
 
-With 5000-10000 hours of work (this is several hours per day over a decade or more) one could become an expert at a skill or a language in most cases.  

In psychotherapy, another confound is whether the time in-between "practice sessions" leads to a regression of learning.  An educational analogy would be practicing math exercises an hour per day with a good teacher, but then practicing another 8 hours a day with a second teacher whose methods contradict the first's.  Often, learning will still take place under this paradigm, but it might be much less efficient.  Persistent mental habits, in the context of mental illness, can be akin to the "second teacher" in this metaphor, and unfortunately they do tend to plague people for many hours per day.

This reminds me of the evolving evidence about stroke rehabilitation & neuroplasticity:  substantial brain change can happen in as short a time as 16 days--but it requires very strict inhibition or constraint of the pathways which obstruct rehabilitation. (note: 16 days of continuous "immersion" = 16*24 = 384 hours!)  In stroke rehabilitation, the neuroplasticity effect is much more pronounced if the unaffected limb is restrained, compelling the brain to optimize improvement in function of the afflicted limb.  Here is a recent reference showing rapid brain changes following limb immobilization: http://www.ncbi.nlm.nih.gov/pubmed/22249495

In conclusion, I believe it is important to have a clear idea about how much time and deliberate, focused effort are needed to change psychological symptoms or problems through therapeutic activities.  A little bit of meaningful change could happen with just a few hours of work.  In most cases, 100 hours is needed simply to get started with a new skill.  1000 hours is needed to become fluent.  And 5000-10000 hours is needed to master something.  These times would be much longer still if the periods between practice sessions are regressive.  In the case of addictions, eating disorders, self-harm, or OCD, for example, relapses or even fantasies about relapse will substantially prolong the time it takes for any therapeutic effort to help.  Of course, it is the nature of these problems to involve relapses, or fantasies about relapse -- so one should let go of the temptation to feel guilty when relapses occur.  But if one is struggling with an addictive problem of this sort, it may help to remind oneself that the brain can change very substantially if one can hold to quite a strict behavioural pattern for the hundreds or thousands of hours that are needed.

As a visual reminder of this process, start with an empty transparent bottle, which can hold 250-500 mL of liquid (1-2 cups), and which can be tightly sealed with a small cap.  Add one drop of water every time you invest one hour of focused, deliberate therapeutic work.  The amount of time you need to spend in therapy depends on your goal.  If the goal is total mastery, then you must fill the entire bottle.  If simple competence in a new skill is an adequate goal, then you need only fill the cap of the bottle.  If there are activities in your day which contradict the therapeutic work, it would be like a little bit of water leaking out of your bottle.  So you must also attend to repairing any "leaks."  But every hour of your effort counts towards your growth.
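(To make the arithmetic of the metaphor concrete: using the common pharmacy approximation of about 20 drops per millilitre, a 500 mL bottle holds on the order of 10,000 drops -- one for each hour of mastery-level practice -- while a small cap of a few millilitres holds roughly 100 drops, matching the approximately 100 hours needed for basic competence in a new skill.)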

Monday, February 6, 2012

Scopolamine for Depression

Scopolamine is an acetylcholine-receptor blocker, which is usually used to treat or prevent motion sickness. Some recent studies show that it might be useful to treat depression.  Here is some background, followed by a few references to research studies:  

The old tricyclic antidepressants (such as amitriptyline) were shown over many years to work very well for many people.  Unfortunately, they are laden with side-effect problems and a significant toxicity risk (they can be lethal in overdose).  The side effects are due to various different pharmacologic effects, particularly the blockade of acetylcholine and histamine receptors.  Newer antidepressants, such as those in the SSRI group, have very few such receptor blockade effects.

In some studies, however, the old tricyclics actually are superior to newer antidepressants, especially for severely ill hospitalized depression patients.

It is interesting to consider whether some of the receptor blockade effects which were previously considered just nuisances or side-effect problems, could actually be part of the antidepressant activity.  Or, in some cases, drugs which primarily have receptor blockade side effects may actually be indirectly modulating various other neurotransmitter systems.

A clear precedent exists in this regard:  clozapine is undoubtedly the most effective antipsychotic, but it is loaded with multiple side effects and receptor blockades.  It may be --at least in part-- because of the receptor blockades, not in spite of them, that it works so well.  

Another example of this effect, quite possibly, is related to what I call the "active placebo" literature (I have referred to it elsewhere on this blog: http://garthkroeker.blogspot.com/2009/03/active-placebos.html).  The active placebos used in these studies usually had side effects due to acetylcholine blockade, and the active placebo groups usually improved quite a bit more than the inert placebo groups.  This suggests another interpretation of the "active placebo" effect: perhaps it is not simply the existence of side effects that psychologically boosts the placebo response; perhaps the side effects themselves reflect a pharmacologic action that is of direct relevance to the treatment of depression.

Here are some studies looking at  scopolamine infusions to treat depression:

http://www.ncbi.nlm.nih.gov/pubmed/17015814
This 2006 study from Archives of General Psychiatry showed that 4 mcg/kg IV infusions of scopolamine (given in 3 doses, every 3-5 days) led to a rapid reduction in depression symptoms (a halving of the MADRS score), with a pronounced difference from placebo.  Of particular note is that the cohort consisted mainly of chronically depressed patients with comorbidities and unsuccessful trials of other treatments.  Surprisingly, there were few side-effect problems, aside from a higher rate of the expected anticholinergic dry mouth and dizziness.
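(For a rough sense of scale: for a hypothetical 70 kg patient, 4 mcg/kg works out to about 280 mcg, or roughly 0.28 mg, per infusion -- about 0.84 mg in total over the three infusions.)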

 http://www.ncbi.nlm.nih.gov/pubmed/20074703
This is a replication of the study mentioned above, published in Biological Psychiatry in 2010. 

 http://www.ncbi.nlm.nih.gov/pubmed/20736989
Another similar study, this time showing a greater effect in women; again a 4 mcg/kg infusion protocol was used. 

http://www.ncbi.nlm.nih.gov/pubmed/20926947
This is evidence from an animal study that scopolamine -- or acetylcholine blockade in general -- affects NMDA-related activity, generally antagonizing the effects of NMDA.  This is consistent with the theory that scopolamine may work in a manner similar to the NMDA-blocker ketamine (which has been associated with rapid improvement in depression symptoms), but with much less risk of dangerous medical or neuropsychiatric side effects.

http://www.ncbi.nlm.nih.gov/pubmed/21306419
This article looks at the pharmacokinetics of infused scopolamine, and also gives a detailed account of side-effects.  There are notable cognitive side-effects, such as reduced efficiency of short-term memory.

http://www.ncbi.nlm.nih.gov/pubmed/16719539
This study looks at dosing scopolamine as a patch.  The patch is designed to give a rapidly absorbed loading dose, then a gradual release to maintain a fairly constant level over 3 days.  My own estimation, based on reviewing this information, is that a scopolamine patch would roughly approximate the IV doses used in the depression treatment studies described above, though of course the serum levels would be more constant.
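(As a rough check of this estimation: a standard transdermal scopolamine patch is typically described as delivering about 1 mg over 3 days, i.e. roughly 0.3 mg per day -- the same order of magnitude as a single 4 mcg/kg infusion, which is about 0.28 mg for a hypothetical 70 kg patient, though of course delivered continuously rather than as a bolus.)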

Transdermal scopolamine patches are available in Canada from pharmacists without a physician's prescription.

While this is an interesting -- though far from proven -- treatment idea, it is very important to be aware of anticholinergic side effects, which at times could be physically and psychologically unpleasant.  At worst, cognitive impairment or delirium could occur as a result of excessive cholinergic blockade.  Therefore, any attempt to treat psychiatric symptoms using anticholinergics should be undertaken in close collaboration with a psychiatrist.