This is the first of a series of posts I've been planning based on Daniel Kahneman's book Thinking, Fast and Slow.
I found this book excellent: an account of how biased the brain is in forming decisions and judgments, grounded in 40-50 years of solid research from the social and cognitive psychology literature.
My purpose in reflecting on this book in detail is to add ideas about understanding the brain's biases in the context of psychiatric symptoms, and then to propose therapeutic exercises which could counter or resolve those biases and strengthen cognitive faculties which may be intrinsically weak.
----
The first few chapters of this book are introductions to the idea that the brain can be understood as having two main modes of processing and responding to information; the author calls these "system 1" and "system 2."
System 1 is rapid, automatic, reflexive, and often unconscious. It is the dominant system in most cases. It is the foundation of "intuition." It is built upon deeply ingrained memory for similar situations. It is a foundation of all talent and mastery of skills, in that it permits one to perform a difficult task with ease, without even having to "think" about it (e.g. for a master musician, athlete, surgeon, or really any other occupation). But system 1 is extremely prone to biases. Its mode of processing data is based on what it has experienced repeatedly in the past -- so it is a kind of autopilot -- and it can be very easily fooled (yet, on the other hand, its rich set of past associations may be a fertile ground for imagination, creativity, and inspired insight).
System 2 is a highly conscious, intellectually analytical mode. It permits us to systematically solve a difficult multi-step problem of any sort. It permits us to cope with situations which differ from an overlearned template. It would be like the true pilot landing a plane in difficult or rapidly changing conditions, instead of letting the autopilot try to land it.
One of Kahneman's main theses is that system 2 can be easily fooled too! While system 2 is the only cognitive mechanism which could prevent biased interpretation of information, Kahneman shows that system 2 is intrinsically "lazy." Because engaging system 2 is effortful -- it demands energy -- we are strongly drawn to intellectual processes which minimize the energy expenditure. If system 1 has an automatic, "intuitive" answer for us, then we would tend not to engage system 2 at all. And if a rapid engagement of system 2 appears to be sufficient to get an answer, we will usually not spend extra time or energy. Thus system 2 can easily lead us to a premature and inaccurate conclusion.
Another of Kahneman's main theses has to do with the nature of phenomena, cause-and-effect, and data in general. Accurate conclusions about cause and effect often require a type of statistical analysis (even a simple one, employing quite straightforward rules of probability), but Kahneman shows that the brain (both system 1 and system 2) is not intrinsically designed to think in a statistical fashion. Therefore we tend to greatly distort the likelihood of various types of events.
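Kahneman's point about non-statistical thinking can be made concrete with a classic base-rate problem of the kind studied in this literature. Below is a minimal sketch; all the numbers (prevalence, sensitivity, false-positive rate) are hypothetical, chosen only to illustrate how far intuition can drift from the calculated probability.

```python
# Hypothetical screening-test example (illustrative numbers, not from the book):
# intuition says a positive test means ~90% chance of having the condition,
# but Bayes' rule says otherwise when the condition is rare.
prevalence = 0.01        # 1% of people have the condition
sensitivity = 0.90       # the test detects 90% of true cases
false_positive = 0.09    # 9% of healthy people test positive anyway

# Total probability of a positive test, then Bayes' rule:
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
p_condition_given_positive = (prevalence * sensitivity) / p_positive

print(round(p_condition_given_positive, 3))  # about 0.092 -- far from the intuitive "90%"
```

The reflexive answer ignores the base rate (the 1% prevalence); the calculated answer is roughly ten times smaller than intuition suggests.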
An area where I would want to extend beyond Kahneman's main theses is that I suspect both system 1 and system 2 could be very specifically trained to reduce biases. Kahneman seems somewhat resigned to conclude that the brain simply can't resist the types of biases he describes (citing, for example, profoundly biased thinking in his psychology student subjects -- or even in himself -- whose biases were evidently not reduced by understanding and education). But I do not see that much work has been done to specifically and intensively train the mind to reduce biases. I think that simply learning about bias is not enough; it is something that must be practiced for hundreds of hours, just like any other skill. (This reminds me of something said in psychotherapy: "insight alone is not enough to effect change -- it must be accompanied by action.")
I believe this is relevant to psychiatry, in that all mental illnesses (such as depression, anxiety disorders, personality disorders, psychosis, and attention/learning disorders) contain symptoms which affect cognition. In cognitive therapy theory, it is assumed that depressive cognitions cause and perpetuate the mood disorder. Many such "cognitive distortions" could be looked at through the lens of "system 1" and "system 2" problems. For example, in many chronic symptom situations, system 1 may have developed a very deeply ingrained, reflexively negative expectation about a great many situations, with many of these reflexes being unconscious. These reflexes could have developed from childhood experience with parents (consistent with a sort of psychoanalytic model), but I think the most prominent source of such reflexes would simply be having had a particular symptom frequently for years or decades at a time, regardless of that symptom's original cause. Under such conditions the brain would change its expectations about the outcome of many events, based on the repeated negative experiences of the past (which could have been due to poor external environmental conditions, but also simply to the past chronicity of symptoms).
A proposed treatment for this phenomenon could very much be along the lines of cognitive therapy. But I might suggest extending a specific focus on depressive "cognitive distortions" etc. to work on understanding and countering bias in systems 1 and 2 in general. I propose that intellectual exercises to minimize biased interpretation of perceptions -- even if these exercises have little directly to do with psychiatric symptoms or depressive cognitions, etc. -- could be useful as a therapy for psychiatric disorders.
As outrageous as it seems, educating oneself about statistics and practicing statistics problems repeatedly may be therapeutic for psychiatric illness!
I'll try to continue this discussion with more specific examples in later posts.
Thursday, May 10, 2012
Wednesday, May 9, 2012
Blueberries are good for your brain
Another study published in 2012 about dietary berry intake associated with slower rates of cognitive decline:
http://www.ncbi.nlm.nih.gov/pubmed/22535616
Here's a reference to a 2010 article by Krikorian et al., published in the Journal of Agricultural and Food Chemistry:
http://www.ncbi.nlm.nih.gov/pubmed/20047325
The article describes a randomized, placebo-controlled study in which 9 elderly adults were given about 500 ml/day of blueberry juice, with another 7 given a placebo fruit juice without blueberries. The study lasted 12 weeks, at which time cognitive and mood tests were administered.
The blueberry group clearly showed better memory performance than the placebo group, and the results had a robust level of statistical significance. The blueberry group also showed some improvement in depression symptoms.
Here's a reference to another review article on this:
http://www.ncbi.nlm.nih.gov/pubmed/18211020
The authors allude to other studies showing improved cognitive performance in animals given blueberry supplementation.
In the meantime, it seems like sound advice to include more blueberries in your diet. They make an excellent snack, and a much healthier alternative to junk foods such as chips or candy.
Monday, February 13, 2012
Statistics in Psychiatry & Medicine
This is a continuation of my thoughts about this subject.
Statistical analysis is extremely important for understanding cause & effect! A very strong factor in this issue is the way the human mind interprets data. Daniel Kahneman, the Nobel laureate psychologist, is a great expert on this subject, and I strongly recommend his fantastic book Thinking, Fast and Slow. I'd like to review the book in much more detail later, but as a start I will say that it clearly shows how the mind is loaded with powerful biases, which cause us to form rapid but erroneous impressions about cause & effect, largely because a statistical treatment of information is outside the capacity of the rapid, reflexive intuition which dominates our moment-to-moment cognition. And, of course, a lack of education about statistics and probability eliminates the possibility that the more rational part of our minds can overrule the reflexive, intuitive side. Much of Kahneman's work has to do with how the mind intrinsically attempts to make sense of statistical information -- often reaching incorrect conclusions. The implication is that we must coolly calculate probabilities in order to interpret a body of data, and resist the urge to use "intuition," especially in a research study.
I do believe that a formal statistical treatment of data is much more common now in published research. But I am now going to argue for something that seems entirely contradictory to what I've just said above! I'll proceed by way of a fictitious example:
Suppose 1000 people are sampled (the sample size carefully chosen by a statistical calculation, so that a genuine effect would likely be detected, with only a small probability of an apparent effect being due to chance), all of whom have a DSM diagnosis of major depressive disorder, with HAM-D scores between 25 and 30. And suppose they are divided into two groups of 500, matched for gender, demographics, severity, chronicity, etc. Then suppose one group is given a treatment such as psychotherapy or a medication, and the other group is given a placebo treatment. This could continue for 3 months, then the groups could be switched, so that every person in the study would at some point receive the active treatment and at another point the placebo.
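For readers curious what such a sample-size calculation typically looks like, here is a rough sketch of the standard normal-approximation formula for comparing two group means on a continuous outcome (such as a HAM-D change score). The effect size and standard deviation below are illustrative assumptions of mine, not values from any real trial.

```python
import math

def n_per_group(delta, sd):
    """Patients needed per arm to detect a true mean difference `delta`,
    assuming outcome standard deviation `sd`, two-sided alpha = 0.05
    (z = 1.96) and 80% power (z = 0.84), by the normal approximation."""
    z = 1.96 + 0.84
    return math.ceil((z ** 2 * 2 * sd ** 2) / delta ** 2)

# e.g. to detect a 3-point HAM-D difference with an assumed SD of 8:
print(n_per_group(delta=3, sd=8))  # 112 per arm
```

Smaller assumed effects, or stricter alpha and power, drive the required sample into the hundreds per arm, which is why large trials like the fictitious one above are designed with such calculations in advance.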
This is a typical design for treatment studies, and I think it is very strong. If the result of the study is positive, this is very clear evidence that the active treatment is useful.
But suppose the result of the study is negative. What could this mean? Most of us would conclude that the active treatment is therefore not useful. --But I believe this is an incorrect conclusion!--
Suppose, this time, that the study involves people complaining of severe headaches, carefully matched for severity, chronicity, etc. And suppose the treatment offered was neurosurgery or placebo. I think the results -- carefully summarized by a statistical statement -- would show that neurosurgery does not exceed placebo for the treatment of headache (in fact, I'll bet the neurosurgery group would do a lot worse!).
Yet in this group of 1000 people, it is possible that 1 or 2 of these headache sufferers were having headaches due to a surgically curable brain tumor, or a hematoma. These 1 or 2 patients would have a high chance of being cured by a surgical procedure, while a therapy effective for most other headache sufferers (e.g. a triptan for migraine, an analgesic, or relaxation exercises) would have either no effect or a spurious benefit (relaxation might make the headache pain from a tumor temporarily better -- and ironically would delay a definitive cure!).
Likewise, in a psychiatric treatment study, it may be possible that subtypes exist (perhaps based on genotype or some other factor currently not well understood), which respond very well to specific therapies, despite the majority of people in the group sharing similar symptoms not responding well to these same therapies. For example, some individual depressed patients may have a unique characteristic (either biologically or psychologically) which might make them respond to a treatment that would have no useful effect for the majority.
With the most common statistical analyses done and presented in psychiatric and other medical research studies, there would usually be no way to detect this phenomenon: negative studies would influence practitioners to abandon the treatment strategy for the whole group.
How can this be remedied? I think the simplest method would be trivial: all research studies should include in the publication every single piece of data gathered! If there is a cohort of 1000 people, there should be a chart or a graph showing the symptom changes over time of every single individual. There would be a messy graph with 1000 lines on it (which is a reason this is not done, of course!) but there would be much less risk that an interesting outlier would be missed! If most of the thousand individuals had no change in symptoms, there would be a huge mass of flat lines across the middle of the chart. But if a few individuals had a total, remarkable cure of symptoms, these individuals would stand out prominently on such a chart. Ironically, in order to detect such phenomena, we would have to temporarily leave aside the statistical tools which we had intended to use, and "eyeball" the data. So intuition could still have a very important role to play in statistics & research!
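The "eyeballing" idea can be simulated. The sketch below uses entirely hypothetical data (not from any real study): 1000 individual symptom-change scores whose group average is essentially zero, with two dramatic responders hidden among them. The group mean says "negative study"; scanning the individual values finds the outliers.

```python
import random

random.seed(1)

# Simulate 1000 patients whose true average change is zero (noise only),
# then plant two hidden "dramatic responders" (negative = symptom reduction).
n = 1000
changes = [random.gauss(0, 3) for _ in range(n)]
changes[17] = -28.0
changes[503] = -26.0

# Group-level summary: mean change is near zero, so the trial looks negative.
mean_change = sum(changes) / n

# Individual-level "eyeballing": list anyone with a dramatic improvement.
responders = [i for i, c in enumerate(changes) if c < -20]

print(round(mean_change, 2), responders)  # responders are found: [17, 503]
```

On a chart with 1000 lines, these two trajectories would plunge away from the flat mass in the middle; the mean alone buries them completely.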
After "eyeballing" the complete set of data from every individual, I do agree that this would have to lead to a new formal hypothesis, which would subsequently be tested using a different study design, one designed specifically to pick up such outliers; a formal statistical procedure would then be used to evaluate whether the treatment is effective for this subgroup (e.g. the tiny group of headache sufferers with a mass evident on a CT brain scan could enter a neurosurgery treatment study, to show clearly whether surgery is better than placebo for this group).
I suspect that in many psychiatric conditions, there are subtypes not currently known about or well-characterized by DSM categorization. Genome studies should be an interesting area in the future decades, to further subcategorize patients sharing identical symptoms, but who might respond very differently to specific treatment strategies.
In the meantime, though, I think it is important to recognize that a negative study, even if done with very good study design and statistical analysis, does not prove that the treatment in question is ineffective for EVERYONE with a particular symptom cluster. There might possibly be individuals who would respond well to such a treatment. We could know this possibility better if the COMPLETE set of data results for each individual patient were published with all research studies.
Another complaint I have about the statistics & research culture has to do with the term "significant." I believe that "significance" is a construct that contradicts the whole point of doing a careful statistical analysis, because it requires pronouncing some particular probability range "significant" and others "insignificant." Often, a p value less than 0.05 is considered "significant." The trouble with this is that the p value speaks for itself; it does not require a human interpretive construct or threshold to call something "significant" or not. I believe that studies should simply report the p value, without labeling the results "significant" or not. This way, two studies which yield p values of 0.04 and 0.07 could be seen to show much more similar results than if you called the first study "significant" and the second "insignificant." There may be instances in which a p value less than 0.25 could still usefully guide a long-shot trial of therapy -- this p value would be very useful to know exactly, rather than simply reading that the result was "very insignificant." Similarly, other types of treatments might demand that the p value be less than 0.0001 in order to safely guide a decision. Having a research culture in which p<0.05="significant" dilutes the power and meaning of the analysis, in my opinion, and arbitrarily introduces a type of cultural judgment which is out of place for careful scientists.
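To see why 0.04 and 0.07 are close neighbors rather than opposites, one can convert each p value back to the test statistic that would have produced it (assuming a standard normal statistic and a two-sided test). The sketch below inverts the two-sided p value by bisection; the function names are mine, not from any statistics package.

```python
import math

def p_two_sided(z):
    """Two-sided p value for a standard-normal test statistic z."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def z_from_p(p, lo=0.0, hi=10.0):
    """Invert p_two_sided by bisection (p decreases as z increases)."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if p_two_sided(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

z_04 = z_from_p(0.04)   # about 2.05
z_07 = z_from_p(0.07)   # about 1.81
print(round(z_04, 2), round(z_07, 2))
```

The two underlying test statistics differ by barely a quarter of a standard error, yet the 0.05 convention would label one study a success and the other a failure.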
Tuesday, February 7, 2012
How long does it take for psychotherapy to work?
There are various research articles describing rates of change in psychotherapy patients; some studies, for example, describe a plateau after about 25 sessions. I find these studies very weak, because of the multitude of confounding factors: severity and chronicity are obvious variables, as is the type of follow-up assessment done.
In the CBT literature, a typical trial of therapy is perhaps 16-20 sessions.
In light of our evolving knowledge of neuroplasticity, and our breadth of understanding about education & learning, it seems to me that the most important variable of all is the amount of focused, deliberate practice time spent in a therapeutic activity. Oddly, most psychotherapy studies--even CBT studies--do not look at how many hours of practice patients have done in-between therapy appointments. This would be like looking at the progress of music students based on how many lessons they get, without taking into account how much they practice during the week.
I have often compared psychological symptom change to the changes which occur, for example, with language learning or with learning a musical instrument.
So, I believe that a reasonable estimate of the amount of time required in psychotherapy depends on what one is trying to accomplish:
-Some types of therapeutic problems might be resolved with a few hours of work, or with a single feedback session with a therapist. This would be akin to a musician with some kind of technical problem who needs just some clear instruction about a few techniques or exercises to practice. Or it might be akin to a person who is already fluent in a foreign language, but needs a few tips from a local speaker about idioms, or perhaps some help with editing or grammar in a written text.
-Many more therapeutic problems could improve with perhaps 100 hours of work. This would be like learning to swim or skate competently if you have never done these activities before. Regular lessons ("therapy") would most likely speed up your rate of progress substantially. But most of those 100 hours would be practice on your own, unless you're okay with the progress taking place over a year or more. With the language analogy, think of how fluent you might become in a foreign language with 100 hours of focused, deliberate practice. For most of us, this would lead to an ability to have a very simple conversational exchange, perhaps to get around in the most basic way in another country.
-A much larger change is possible with 1000 hours of work: with music, one could become quite fluent but probably not an expert. With a foreign language, comfortable fluency would probably be possible, though probably still with an accent and a preference for the old language.
-With 5000-10000 hours of work (this is several hours per day over a decade or more) one could become an expert at a skill or a language in most cases.
In psychotherapy, another confound is whether the times between "practice sessions" lead to a regression of learning. An educational analogy would be practicing math exercises an hour per day with a good teacher, but then practicing another 8 hours a day with another teacher whose methods contradict the first. Often, learning will still take place under this paradigm, but it may be much less efficient. Persistent mental habits, in the context of mental illness, can be akin to the "second teacher" in this metaphor, and unfortunately they do tend to plague people for many hours per day.
This reminds me of the evolving evidence about stroke rehabilitation & neuroplasticity: substantial brain change can happen in as short a time as 16 days--but it requires very strict inhibition or constraint of the pathways which obstruct rehabilitation. (note: 16 days of continuous "immersion" = 16*24 = 384 hours!) In stroke rehabilitation, the neuroplasticity effect is much more pronounced if the unaffected limb is restrained, compelling the brain to optimize improvement in function of the afflicted limb. Here is a recent reference showing rapid brain changes following limb immobilization: http://www.ncbi.nlm.nih.gov/pubmed/22249495
In conclusion, I believe that it is important to have a clear idea about how much time and deliberate, focused effort are needed to change psychological symptoms or problems through therapeutic activities. A little bit of meaningful change could happen with just a few hours of work. In most cases, 100 hours is needed simply to get started with a new skill. 1000 hours is needed to become fluent. And 5000-10000 hours is needed to master something. These times would be much longer still if the periods between practice sessions are regressive. In the case of addictions, eating disorders, self-harm, or OCD, for example, relapses or even fantasies about relapse will substantially prolong the time it takes for any therapeutic effort to help. Of course, it is the nature of these problems to have relapses, or fantasies about relapse -- so one should let go of the temptation to feel guilty if there are relapses. But if one is struggling with an addictive problem of this sort, it may help to remind oneself that the brain can change very substantially if one can hold on to quite a strict behavioural pattern for the hundreds or thousands of hours which are needed.
As a visual reminder of this process, start with an empty transparent bottle, which can hold 250-500 mL of liquid (1-2 cups), and which can be tightly sealed with a small cap. Add one drop of water every time you invest one hour of focused, deliberate therapeutic work. The amount of time you need to spend in therapy depends on your goal. If the goal is total mastery, then you must fill the entire bottle. If simple competence in a new skill is an adequate goal, then you must fill just the cap of the bottle. If there are activities in your day which contradict the therapeutic work, it would be like a little bit of water leaking out of your bottle. So you must also attend to repairing any "leaks." But every hour of your effort counts towards your growth.
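The bottle arithmetic works out neatly if one assumes a standard drop of about 0.05 mL (an assumption of mine; drop sizes vary): a 500 mL bottle then holds roughly 10,000 drops (hours), and a 5 mL cap roughly 100 -- matching the hour estimates above.

```python
# Arithmetic behind the bottle metaphor: one drop of water per hour of
# focused practice. DROP_ML is an assumed typical drop volume.
DROP_ML = 0.05           # assumed ~0.05 mL per drop

bottle_ml = 500          # full bottle ~ mastery
cap_ml = 5               # assumed cap volume ~ basic competence

hours_for_mastery = round(bottle_ml / DROP_ML)      # 10000 hours
hours_for_competence = round(cap_ml / DROP_ML)      # 100 hours

print(hours_for_mastery, hours_for_competence)  # 10000 100
```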
In the CBT literature, a typical trial of therapy is perhaps 16-20 sessions.
In light of our evolving knowledge of neuroplasticity, and our breadth of understanding about education & learning, it seems to me that the most important variable of all is the amount of focused, deliberate practice time spent in a therapeutic activity. Oddly, most psychotherapy studies--even CBT studies--do not look at how many hours of practice patients have done in-between therapy appointments. This would be like looking at the progress of music students based on how many lessons they get, without taking into account how much they practice during the week.
I have often compared psychological symptom change to the changes which occur, for example, with language learning or with learning a musical instrument.
So, I believe that a reasonable estimate of the amount of time required in psychotherapy depends on what one is trying to accomplish:
-Some types of therapeutic problems might be resolved with a few hours of work, or with a single feedback session with a therapist. This would be akin to a musician with some kind of technical problem who needs just some clear instruction about a few techniques or exercises to practice. Or it might be akin to a person who is already fluent in a foreign language, but needs a few tips from a local speaker about idioms, or perhaps some help with editing or grammar in a written text.
-Many more therapeutic problems could improve with perhaps 100 hours of work. This would be like learning to swim or skate competently if you have never done these activities before. Regular lessons ("therapy") would most likely speed up your rate of progress substantially. But most of those 100 hours would be practice on your own, unless you're okay with the progress taking place over a year or more. With the language analogy, think of how fluent you might become in a foreign language with 100 hours of focused, deliberate practice. For most of us, this would lead to an ability to have a very simple conversational exchange, perhaps to get around in the most basic way in another country.
-A much larger change is possible with 1000 hours of work: with music, one could become quite fluent but probably not an expert. With a foreign language, comfortable fluency would probably be possible, though probably still with an accent and a preference for the old language.
-With 5000-10000 hours of work (this is several hours per day over a decade or more) one could become an expert at a skill or a language in most cases.
In psychotherapy, another confound is whether the time in between "practice sessions" leads to a regression of learning. An educational analogy would be practicing math exercises an hour per day with a good teacher, but then practicing another 8 hours a day with another teacher whose methods contradict the first. Often, learning will still take place under this paradigm, but it will be much less efficient. Persistent mental habits, in the context of mental illnesses, can be akin to the "second teacher" in this metaphor, and unfortunately they do tend to plague people for many hours per day.
This reminds me of the evolving evidence about stroke rehabilitation & neuroplasticity: substantial brain change can happen in as short a time as 16 days--but it requires very strict inhibition or constraint of the pathways which obstruct rehabilitation. (note: 16 days of continuous "immersion" = 16*24 = 384 hours!) In stroke rehabilitation, the neuroplasticity effect is much more pronounced if the unaffected limb is restrained, compelling the brain to optimize improvement in function of the afflicted limb. Here is a recent reference showing rapid brain changes following limb immobilization: http://www.ncbi.nlm.nih.gov/pubmed/22249495
In conclusion, I believe that it is important to have a clear idea about how much time and deliberate, focused effort are needed to change psychological symptoms or problems through therapeutic activities. A little bit of meaningful change could happen with just a few hours of work. In most cases, 100 hours is needed to attain basic competence in a new skill. 1000 hours is needed to become fluent. And 5000-10000 hours is needed to master something. These times would be much longer still if the periods between practice sessions are regressive. In the case of addictions, eating disorders, self-harm, or OCD, for example, relapses or even fantasies about relapse will substantially prolong the time it takes for any therapeutic effort to help. Of course, it is the nature of these problems to have relapses, or fantasies about relapse--so one should let go of the temptation to feel guilty if there are relapses. But if one is struggling with an addictive problem of this sort, it may help to remind oneself that the brain can change very substantially if one can hold on to quite a strict behavioural pattern for the hundreds or thousands of hours which are needed.
As a visual reminder of this process, start with an empty transparent bottle, which can hold 250-500 mL of liquid (1-2 cups), and which can be tightly sealed with a small cap. Add one drop of water every time you invest one hour of focused, deliberate therapeutic work. The amount of time you need to spend in therapy depends on your goal. If the goal is total mastery--then you must fill the entire bottle. If simple competence in a new skill is an adequate goal, then you must fill just the cap of the bottle. If there are activities in your day which contradict the therapeutic work, it would be like a little bit of water leaking out of your bottle. So you must also attend to repairing any "leaks." But every hour of your effort counts towards your growth.
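Incidentally, the arithmetic of the bottle metaphor lines up neatly with the practice-hour estimates above. Here is a minimal sketch of that check, assuming the common convention of about 20 drops per millilitre (the bottle and cap volumes are illustrative guesses, not measurements):

```python
# One drop of water per hour of focused practice.
# Assumption: ~20 drops per mL (a common pharmaceutical convention).
DROPS_PER_ML = 20

def hours_to_fill(volume_ml: float) -> int:
    """Hours of practice (one drop each) needed to fill a given volume."""
    return round(volume_ml * DROPS_PER_ML)

print(hours_to_fill(500))  # full 500 mL bottle -> 10000 hours (mastery)
print(hours_to_fill(250))  # smaller 250 mL bottle -> 5000 hours
print(hours_to_fill(5))    # a ~5 mL cap -> 100 hours (basic competence)
```

So a full 500 mL bottle corresponds to the 10000 hours of mastery, and a roughly 5 mL cap to the 100 hours of basic competence--the metaphor is quantitatively faithful, not just evocative.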
Monday, February 6, 2012
Scopolamine for Depression
Scopolamine is an acetylcholine-receptor blocker, which is usually used to treat or prevent motion sickness. Some recent studies show that it might be useful to treat depression. Here is some background, followed by a few references to research studies:
The old tricyclic antidepressants (such as amitriptyline) were shown over many years to work very well for many people. Unfortunately, they are laden with side-effect problems and a significant toxicity risk (they can be lethal in overdose). The side effects are due to various different pharmacologic effects, particularly the blockade of acetylcholine and histamine receptors. Newer antidepressants, such as those in the SSRI group, have very few such receptor blockade effects.
In some studies, however, the old tricyclics actually are superior to newer antidepressants, especially for severely ill hospitalized depression patients.
It is interesting to consider whether some of the receptor blockade effects which were previously considered just nuisances or side-effect problems, could actually be part of the antidepressant activity. Or, in some cases, drugs which primarily have receptor blockade side effects may actually be indirectly modulating various other neurotransmitter systems.
A clear precedent exists in this regard: clozapine is undoubtedly the most effective antipsychotic, but it is loaded with multiple side effects and receptor blockades. It may be --at least in part-- because of the receptor blockades, not in spite of them, that it works so well.
Another example of this effect, quite possibly, is related to what I call the "active placebo" literature (I have referred to it elsewhere on this blog: http://garthkroeker.blogspot.com/2009/03/active-placebos.html). The active placebos used in these studies usually had side effects due to acetylcholine blockade, and the active placebo groups usually improved quite a bit more than those with inert placebos. This suggests another interpretation of the "active placebo" effect: perhaps it is not simply the existence of side effects that psychologically boosts a placebo effect here, but that the side effects themselves are due to a pharmacologic action of direct relevance to the treatment of depression.
Here are some studies looking at scopolamine infusions to treat depression:
http://www.ncbi.nlm.nih.gov/pubmed/17015814
This 2006 study from Archives of General Psychiatry showed that 4 mcg/kg IV infusions of scopolamine (given in 3 doses, every 3-5 days) led to a rapid reduction in depression symptoms (halving of the MADRS score), with a pronounced difference from placebo. Of particular note is that the cohort consisted mainly of chronically depressed patients with comorbidities and unsuccessful trials of other treatments. Surprisingly, there were few side effect problems, aside from a higher rate of the expected anticholinergic-induced dry mouth and dizziness.
http://www.ncbi.nlm.nih.gov/pubmed/20074703
This is a replication of the study mentioned above, published in Biological Psychiatry in 2010.
http://www.ncbi.nlm.nih.gov/pubmed/20736989
Another similar study, this time showing a greater effect in women; again a 4 mcg/kg infusion protocol was used.
http://www.ncbi.nlm.nih.gov/pubmed/20926947
This animal study provides evidence that scopolamine --or acetylcholine blockade in general-- affects NMDA-related activity, in general antagonizing the effects of NMDA. This is consistent with a theory that scopolamine may work in a manner similar to the NMDA-blocker ketamine (which has been associated with rapid improvement in depression symptoms), but without nearly as much risk of dangerous medical or neuropsychiatric side effects.
http://www.ncbi.nlm.nih.gov/pubmed/21306419
This article looks at the pharmacokinetics of infused scopolamine, and also gives a detailed account of side-effects. There are notable cognitive side-effects, such as reduced efficiency of short-term memory.
http://www.ncbi.nlm.nih.gov/pubmed/16719539
This study looks at dosing scopolamine as a patch. The patch is designed to give a rapidly absorbed loading dose, then a gradual release to maintain a fairly constant level over 3 days. My own estimation, based on reviewing this information, is that a scopolamine patch would roughly approximate the IV doses used in the depression treatment studies described above, though of course the serum levels would be more constant.
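As a rough sanity check on that estimation, here is a back-of-the-envelope comparison. The 70 kg body weight is an assumed illustrative value, and the delivery rate comes from standard patch labelling (approximately 1 mg released over 3 days):

```python
# Rough comparison of IV study dosing vs. transdermal patch delivery.
# Assumptions: 70 kg body weight (illustrative); patch labelled to
# deliver ~1 mg (1000 mcg) of scopolamine over 72 hours.

weight_kg = 70
iv_dose_mcg = 4 * weight_kg       # study protocol: 4 mcg/kg per infusion
patch_mcg_per_day = 1000 / 3      # patch delivery averaged per day

print(iv_dose_mcg)                # 280 mcg per infusion
print(round(patch_mcg_per_day))   # ~333 mcg per day
```

On this crude arithmetic, one day of patch wear delivers a quantity of the same order as a single IV dose--consistent with the idea that the patch roughly approximates the studied doses, though spread out as a constant low level rather than a brief peak.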
Transdermal scopolamine patches are available in Canada from pharmacists without a physician's prescription.
While this is an interesting--though far from proven--treatment idea, it is very important to be aware of anticholinergic side effects, which at times can be physically and psychologically unpleasant. At worst, cognitive impairment or delirium could occur as a result of excessive cholinergic blockade. Therefore, any attempt to treat psychiatric symptoms using anticholinergics should be undertaken in close collaboration with a psychiatrist.