Asperger Syndrome is an example of a mild autism-spectrum disorder. Individuals with this condition have difficulty with social and communicative skills, particularly those skills requiring an understanding and awareness of others' emotional states, and those requiring emotional expressivity in speech and non-verbal gestures. Usually individuals with an autism-spectrum condition have a diminished interest in relationships with other people, and therefore prefer solitary activities.
The possibility of Asperger Syndrome should be considered, in my opinion, when the history is one of social difficulties and social withdrawal. Often these problems are the result of social anxiety, depression, etc., but I think we realize these days that mild autistic symptoms are more common in the population than once thought. This may indeed be due to an increase in the rate of autistic symptoms over time, but it could conceivably also be due to greater awareness of the syndrome, and therefore a greater ability to recognize it.
I have found a site with some good tests for autism-spectrum syndromes, from the Autism Research Centre in Cambridge. Here is a link to the tests: http://www.autismresearchcentre.com/arc_tests
The particular test I find most useful is the Autism Spectrum Quotient (AQ), which is a symptom checklist. Average scores on this checklist in a healthy population are about 15 for females and 18 for males, slightly higher for individuals in an analytical scientific profession such as mathematics (typically in the low 20s), but above 30 (typically 35 or higher) for individuals with an autism-spectrum disorder.
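As a rough illustration of how these reference ranges fit together, here is a small sketch. The function and its score bands are my own paraphrase of the numbers above, not an official scoring tool:

```python
def interpret_aq(score: int) -> str:
    """Map a total AQ score onto the rough reference bands quoted above
    (illustrative paraphrase only, not a diagnostic instrument)."""
    if score > 30:
        return "in the range reported for autism-spectrum disorders (typically 35+)"
    elif score > 20:
        return "elevated; overlaps means reported for analytical/scientific professions"
    else:
        return "within the general-population range (means about 15 female, 18 male)"

print(interpret_aq(17))
print(interpret_aq(36))
```

Of course, a single total score compresses away exactly the distinctions discussed below, which is part of the problem with checklist measures.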
I have a few criticisms of some of the AQ questions, which I think spuriously inflate the scores of people who are not autistic at all, but rather socially anxious (e.g. the question "I find social situations easy"), socially introverted but still very attuned to emotions and other people (e.g. "I would rather go to a library than a party"), or simply possessed of particular intellectual interests (e.g. "I am fascinated by dates" or "I am fascinated by numbers"). Thus, shy people, historians, and mathematicians may obtain scores on the AQ which suggest they are more "autistic" than they really are. I would define an autism-spectrum symptom much more specifically, as addressed by the other questions having to do with reduced awareness of other people's emotional states, clearly reduced interest in social engagement or communication, and impaired understanding of social norms in verbal and non-verbal interaction. Still, I think the questionnaire is sensitive, with a large difference in scores between Asperger patients and controls without Asperger Syndrome.
Also on this site there is an interesting set of tests having to do with accurately identifying emotions in pictures, sound clips, or film. One particularly useful example is the "eyes test," in which you have to identify the emotion represented by a picture of someone's eyes. Here again the mean in a non-Asperger population is about 26-28 correct, versus an average of 22 in Asperger patients. If you try this test yourself, I suggest that you review the instructions first, and make sure you are familiar with all the vocabulary words used in the test (these are all available in the downloads). The test could be biased against individuals who are just a bit less familiar with the vocabulary used.
It is on my list of things to write about to discuss autism-spectrum disorders further, since the theme comes up not infrequently, especially in a university population. Part of the discussion could include things that might help (such as social skills training), but also an understanding of the issue, in its mild forms at least, as a character variant with a variety of positive aspects for the individual and for society, rather than as an overt "pathology." In any case, I think understanding and discussion will help.
Tuesday, July 17, 2012
Thursday, May 10, 2012
"The Lazy Controller" -- reflections about Kahneman's book
This is the first of a series of posts I've been planning based on Daniel Kahneman's book Thinking, Fast and Slow.
I found this book excellent: an account of how biased the brain is in forming decisions and judgments, supported by an abundance of solid research from 40-50 years of the social and cognitive psychology literature.
My purpose in reflecting on this book in detail is to add ideas about understanding the brain's biases in the context of psychiatric symptoms, and then to propose therapeutic exercises which could counter or resolve those biases, and strengthen cognitive faculties which may be intrinsically weak.
----
The first few chapters of this book are introductions to the idea that the brain can be understood as having two main modes of processing and responding to information; the author calls these "system 1" and "system 2."
System 1 is rapid, automatic, reflexive, and often unconscious. It is the dominant system in most cases. It is the foundation of "intuition." It is built upon deeply ingrained memory for similar situations. It is a foundation of all talent and mastery of skills, in that it permits one to perform a difficult task with ease, without even having to "think" about it (e.g. for a master musician, athlete, surgeon, or really any other occupation). But system 1 is extremely prone to biases. Its mode of processing data is based on what it has experienced repeatedly in the past -- so it is a kind of autopilot -- and it can be very easily fooled (yet, on the other hand, its rich set of past associations may be a fertile ground for imagination, creativity, and inspired insight).
System 2 is a highly conscious, intellectually analytical mode. It permits us to systematically solve a difficult multi-step problem of any sort. It permits us to cope with situations which differ from an overlearned template. It would be like the true pilot landing a plane in difficult or rapidly changing conditions, instead of letting the autopilot try to land it.
One of Kahneman's main theses is that system 2 can be easily fooled too! While system 2 is the only cognitive mechanism which could prevent biased interpretation of information, Kahneman shows that system 2 is intrinsically "lazy." Because engaging system 2 is effortful -- it demands energy -- we are strongly drawn to intellectual processes which minimize the energy expenditure. If system 1 has an automatic, "intuitive" answer for us, then we would tend not to engage system 2 at all. And if a rapid engagement of system 2 appears to be sufficient to get an answer, we will usually not spend extra time or energy. Thus system 2 can easily lead us to a premature and inaccurate conclusion.
Another of Kahneman's main theses has to do with the nature of phenomena, cause-and-effect, and data in general. Accurate conclusions about cause and effect often require a type of statistical analysis (even a simple one, employing quite straightforward rules of probability), but Kahneman shows that the brain (both system 1 and system 2) is not intrinsically designed to think in a statistical fashion. Therefore we tend to greatly distort the likelihood of various types of events.
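A classic example of the kind of probability problem the mind handles poorly is base-rate neglect in diagnostic testing. Here is a short worked sketch; the test characteristics and base rate are invented for illustration, not taken from Kahneman's book:

```python
def posterior(prior, sensitivity, specificity):
    """P(condition | positive test), computed by Bayes' rule."""
    true_pos = sensitivity * prior            # truly ill and test positive
    false_pos = (1 - specificity) * (1 - prior)  # healthy but test positive
    return true_pos / (true_pos + false_pos)

# A test that is 90% sensitive and 90% specific, for a condition with a
# 1% base rate: intuition says a positive result means you probably have
# the condition, but the actual probability is under 10%.
print(round(posterior(prior=0.01, sensitivity=0.90, specificity=0.90), 3))  # → 0.083
```

The intuitive (system 1) answer ignores the base rate entirely; the correct answer is dominated by it, because false positives from the large healthy majority swamp the true positives.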
An area I would want to extend beyond Kahneman's main theses is that I suspect both system 1 and system 2 could be very specifically trained to reduce biases. Kahneman seems somewhat resigned to conclude that the brain simply can't resist the types of biases he describes (citing, for example, profoundly biased thinking in his psychology student subjects -- or even in himself -- whose biases were evidently not reduced by understanding and education). But I do not see that very much work has been done to specifically and intensively train the mind to reduce biases -- I think that simply learning about bias is not enough; it is something that must be practiced for hundreds of hours, just like any other skill. (This reminds me of something said in psychotherapy: "insight alone is not enough to effect change -- it must be accompanied by action.")
I believe this is relevant to psychiatry, in that all mental illnesses (such as depression, anxiety disorders, personality disorders, psychosis, and attention/learning disorders) contain symptoms which affect cognition. In cognitive therapy theory, it is assumed that depressive cognitions cause and perpetuate the mood disorder. Many such "cognitive distortions" could be looked at through the lens of "system 1" and "system 2" problems. For example, in many chronic symptom situations, system 1 may have developed deeply ingrained, reflexively negative expectations about a great many situations, with many of these reflexes being unconscious. These reflexes could possibly have developed from childhood experiences with parents (consistent with a sort of psychoanalytic model), but I think the most prominent source of such reflexes would simply be having had a particular symptom frequently for years or decades at a time, regardless of that symptom's original cause. Under such conditions the brain would change its expectation about the outcome of many events, based on the repeated negative experiences of the past (which could have been due to poor external environmental conditions, but also simply to the past chronicity of symptoms).
A proposed treatment for this phenomenon could very much be along the lines of cognitive therapy. But I might suggest extending a specific focus on depressive "cognitive distortions" etc. to work on understanding and countering bias in systems 1 and 2 in general. I propose that intellectual exercises to minimize biased interpretation of perceptions -- even if these exercises have little directly to do with psychiatric symptoms or depressive cognitions, etc. -- could be useful as a therapy for psychiatric disorders.
As outrageous as it seems, educating oneself about statistics, and practicing statistics problems repeatedly -- may be therapeutic for psychiatric illness!
I'll try to continue this discussion with more specific examples in later posts.
Wednesday, May 9, 2012
Blueberries are good for your brain
Here is another study, published in 2012, in which dietary berry intake was associated with slower rates of cognitive decline:
http://www.ncbi.nlm.nih.gov/pubmed/22535616
Here's a reference to a 2010 article by Krikorian et al. published in the Journal of Agricultural and Food Chemistry:
http://www.ncbi.nlm.nih.gov/pubmed/20047325
The article describes a randomized, placebo-controlled study in which 9 elderly adults were given about 500 ml/day of blueberry juice, with another 7 given a placebo fruit juice without blueberries. The study lasted 12 weeks, at which time cognitive and mood tests were administered.
The blueberry group clearly showed better memory performance than the placebo group, and the results had a robust level of statistical significance. The blueberry group also showed some improvement in depression symptoms.
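With groups this small (9 versus 7), it is worth seeing how significance can even be assessed; one transparent approach is an exact permutation test, which enumerates every possible relabeling of the subjects. The scores below are invented for illustration -- they are not the study's actual data, and this is not necessarily the analysis the authors used:

```python
import itertools

# Hypothetical memory-test scores, invented for illustration:
blueberry = [10, 12, 11, 13, 12, 14, 11, 12, 13]  # n = 9
placebo = [9, 10, 8, 10, 9, 11, 10]               # n = 7

def permutation_p(a, b):
    """Exact two-sided p-value: the fraction of all ways of relabeling
    the pooled scores into groups of the same sizes that produce a mean
    difference at least as extreme as the one observed."""
    pooled = a + b
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    hits = total = 0
    for idx in itertools.combinations(range(len(pooled)), len(a)):
        chosen = set(idx)
        grp_a = [pooled[i] for i in chosen]
        grp_b = [pooled[i] for i in range(len(pooled)) if i not in chosen]
        diff = abs(sum(grp_a) / len(grp_a) - sum(grp_b) / len(grp_b))
        total += 1
        if diff >= observed:
            hits += 1
    return hits / total

print(permutation_p(blueberry, placebo))
```

With 16 subjects there are only 11,440 possible relabelings, so the test is exact rather than approximate -- a nice property at sample sizes where large-sample formulas are least trustworthy.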
Here's a reference to another review article on this:
http://www.ncbi.nlm.nih.gov/pubmed/18211020
The authors allude to other studies showing improved cognitive performance in animals given blueberry supplementation.
In the meantime, it seems quite sound advice to include more blueberries in your diet. They make an excellent snack food, and a much healthier alternative to junk foods such as chips or candy.
Monday, February 13, 2012
Statistics in Psychiatry & Medicine
This is a continuation of my thoughts about this subject.
Statistical analysis is extremely important to understand cause & effect! A very strong factor in this issue has to do with the way the human mind interprets data; Daniel Kahneman, the Nobel laureate psychologist, is a great expert on this subject, and I strongly recommend a fantastic book of his called Thinking, Fast and Slow. I'd like to review his book in much more detail later, but as a start I will say that it clearly shows how the mind is loaded with powerful biases, which cause us to make rapid but erroneous impressions about cause & effect, largely because a statistical treatment of information is outside the capacity of the rapid reflexive intuition which dominates our moment-to-moment cognitions. And, of course, a lack of education about statistics and probability eliminates the possibility that the more rational part of our minds can overrule the reflexive, intuitive side. Much of Kahneman's work has to do with how the mind intrinsically attempts to make sense of statistical information -- often with incorrect conclusions. The implication here is that we must coolly calculate probabilities in order to interpret a body of data, and resist the urge to use "intuition," especially in a research study.
I do believe that a formal statistical treatment of data is much more common now in published research. But I am now going to argue for something that seems entirely contradictory to what I've just said above! I'll proceed by way of a fictitious example:
Suppose 1000 people are sampled (the sample size carefully chosen by a statistical calculation, so that a true effect of a given size would be detected, with only a small probability of the result being due to chance), all with a DSM diagnosis of major depressive disorder and HAM-D scores between 25 and 30. And suppose they are divided into two groups of 500, matched for gender, demographics, severity, chronicity, etc. Then suppose one group is given a treatment such as psychotherapy or a medication, and the other group is given a placebo treatment. This could continue for 3 months, then the groups could be switched, so that every person in the study would at some point receive the active treatment and at another point the placebo.
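As a sketch of the kind of sample-size calculation alluded to above, here is a standard normal-approximation formula for a two-arm comparison of means. The effect size, alpha, and power below are illustrative choices, not taken from any particular study:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sample comparison of
    means, to detect a standardized mean difference `effect_size` at
    two-sided significance level `alpha` with the given power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = z.inv_cdf(power)           # quantile corresponding to power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Detecting a small-to-moderate effect (Cohen's d = 0.25) at 80% power:
print(n_per_group(0.25))  # → 252 per arm, i.e. samples in the hundreds
```

This is why trials powered for modest average effects end up enrolling hundreds of subjects -- which, as argued below, is precisely the setting in which a few dramatic individual responders can vanish into the group statistics.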
This is a typical design for treatment studies, and I think it is very strong. If the result of the study is positive, this is very clear evidence that the active treatment is useful.
But suppose the result of the study is negative. What could this mean? Most of us would conclude that the active treatment is therefore not useful. --But I believe this is an incorrect conclusion!--
Suppose, yet again, that this is a study of people complaining of severe headaches, carefully controlled for matching severity and chronicity, etc. And suppose the treatment offered was neurosurgery or placebo. I think that the results-- carefully summarized by a statistical statement--would show that neurosurgery does not exceed placebo (in fact, I'll bet the neurosurgery group would do a lot worse!) for treatment of headache.
Yet -- in this group of 1000 people, it is possible that 1 or 2 of these headache sufferers were having a headache due to a surgically curable brain tumor, or a hematoma. These 1 or 2 patients would have a high chance of being cured by a surgical procedure, while some other therapy effective for most other headache sufferers (e.g. a triptan for migraine, or an analgesic, or relaxation exercises) would have either no effect or a spurious benefit (relaxation might make the headache pain from a tumor temporarily better -- and ironically would delay a definitive cure!).
Likewise, in a psychiatric treatment study, it may be possible that subtypes exist (perhaps based on genotype or some other factor currently not well understood), which respond very well to specific therapies, despite the majority of people in the group sharing similar symptoms not responding well to these same therapies. For example, some individual depressed patients may have a unique characteristic (either biologically or psychologically) which might make them respond to a treatment that would have no useful effect for the majority.
With the most common statistical analyses done and presented in psychiatric and other medical research studies, there would usually be no way to detect this phenomenon: negative studies would influence practitioners to abandon the treatment strategy for the whole group.
How can this be remedied? I think the simplest remedy is almost trivial: all research studies should include in the publication every single piece of data gathered! If there is a cohort of 1000 people, there should be a chart or a graph showing the symptom changes over time of every single individual. It would be a messy graph with 1000 lines on it (which is a reason this is not done, of course!), but there would be much less risk that an interesting outlier would be missed. If most of the thousand individuals had no change in symptoms, there would be a huge mass of flat lines across the middle of the chart. But if a few individuals had a total, remarkable cure of symptoms, these individuals would stand out prominently on such a chart. Ironically, in order to detect such phenomena, we would have to temporarily leave aside the statistical tools which we had intended to use, and "eyeball" the data. So intuition could still have a very important role to play in statistics & research!
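The dilution phenomenon is easy to demonstrate with a toy simulation (all numbers invented): a dramatic cure in 10 of 500 treated patients barely moves the group mean, yet an individual-level scan -- the numeric analogue of eyeballing a 500-line chart -- finds those individuals at once.

```python
import random
random.seed(0)  # reproducible illustration

n = 500
# 490 non-responders: symptom-change scores centered on zero...
treated = [random.gauss(0, 5) for _ in range(n - 10)]
# ...plus 10 true responders with a dramatic ~20-point improvement:
treated += [random.gauss(20, 2) for _ in range(10)]
placebo = [random.gauss(0, 5) for _ in range(n)]

# Group-level comparison: the responders are diluted almost to nothing
# (10 cures of ~20 points spread over 500 people shifts the mean ~0.4).
mean_diff = sum(treated) / n - sum(placebo) / n
print(f"group mean difference: {mean_diff:.2f} points")

# Individual-level scan: flag anyone with an improvement far outside
# the null distribution -- the cured patients stand out immediately.
dramatic = [x for x in treated if x > 12]
print(f"individuals with dramatic improvement: {len(dramatic)}")
```

The group comparison would be declared "negative," yet the ten cured individuals are sitting in plain view in the raw data -- which is exactly the argument for publishing every individual trajectory.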
After "eyeballing" the complete set of data from every individual, I do agree that this would have to lead to another formal hypothesis, which would subsequently have to be tested using a different study design -- one designed specifically to pick up such outliers -- after which a formal statistical procedure would have to be used to evaluate whether the treatment is effective for this subgroup (e.g. the tiny group of headache sufferers specifically with a mass evident on a CT brain scan could enter a neurosurgery treatment study, to clearly show whether surgery is better than placebo for this group).
I suspect that in many psychiatric conditions, there are subtypes not currently known about or well-characterized by DSM categorization. Genome studies should be an interesting area in the future decades, to further subcategorize patients sharing identical symptoms, but who might respond very differently to specific treatment strategies.
In the meantime, though, I think it is important to recognize that a negative study, even if done with very good study design and statistical analysis, does not prove that the treatment in question is ineffective for EVERYONE with a particular symptom cluster. There might possibly be individuals who would respond well to such a treatment. We could know this possibility better if the COMPLETE set of data results for each individual patient were published with all research studies.
Another complaint I have about the statistics & research culture has to do with the term "significant." I believe that "significance" is a construct that contradicts the whole point of doing a careful statistical analysis, because it requires pronouncing some particular probability range "significant" and others "insignificant." Often, a p value less than 0.05 is considered "significant." The trouble with this is that the p value speaks for itself; it does not require a human interpretive construct or threshold to call something "significant" or not. I believe that studies should simply report the p value, and not label the results "significant" or not. This way, two studies which yield p values of 0.04 and 0.07 could be seen to show much more similar results than if you called the first study "significant" and the second "insignificant." There may be some instances in which a p value less than 0.25 could still usefully guide a long-shot trial of therapy -- this p value would be very useful to know exactly, rather than simply reading that the result was "very insignificant." Similarly, other types of treatments might demand that the p value be less than 0.0001 in order to safely guide a decision. Having a research culture in which p<0.05="significant" dilutes the power and meaning of the analysis, in my opinion, and arbitrarily introduces a type of cultural judgment which is out of place for careful scientists.
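The 0.04-versus-0.07 point can be made numerically: an identical standardized effect, measured with slightly different sample sizes, lands on opposite sides of the 0.05 line. The effect size and sample sizes below are invented for illustration:

```python
from math import sqrt
from statistics import NormalDist

def two_sample_p(effect_size, n_per_arm):
    """Two-sided p-value from a two-sample z-test of a standardized
    mean difference, with two equal arms of size n_per_arm."""
    z = effect_size * sqrt(n_per_arm / 2)
    return 2 * (1 - NormalDist().cdf(z))

# The exact same effect (d = 0.37), in two studies of slightly
# different size:
p1 = two_sample_p(0.37, 60)  # falls just under 0.05: "significant"
p2 = two_sample_p(0.37, 45)  # falls just over 0.05: "insignificant"
print(round(p1, 3), round(p2, 3))
```

Reported as raw p values, the two results look like what they are -- near-identical evidence for the same effect; reported as "significant" versus "insignificant," they look contradictory.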
Statistical analysis is extremely important to understand cause & effect! A very strong factor in this issue has to do with the way the human mind interprets data; Daniel Kahneman, the Nobel laureate psychologist, is a great expert on this subject, and I strongly recommend a fantastic book of his called Thinking, Fast and Slow. I'd like to review his book in much more detail later, but as a start I will say that it clearly shows how the mind is loaded with powerful biases, which cause us to make rapid but erroneous impressions about cause & effect, largely because a statistical treatment of information is outside the capacity of the rapid reflexive intuition which dominates our moment-to-moment cognitions. And, of course, a lack of education about statistics and probability eliminates the possibility that the more rational part of our minds can overrule the reflexive, intuitive side. Much of Kahneman's work has to do with how the mind intrinsically attempts to make sense of statistical information -- often with incorrect conclusions. The implication here is that we must cooly calculate probabilities in order to interpret a body of data, and resist the urge to use "intuition," especially in a research study.
I do believe that a formal statistical treatment of data is much more common now in published research. But I am now going to argue for something that seems entirely contradictory to what I've just said above! I'll proceed by way of a fictitious example:
Suppose 1000 people are sampled, (the sample size being carefully chosen using a statistical calculation, to elicit a significant effect size if truly present with a small probability of this effect being due to chance), all of whom with a DSM diagnosis of major depressive disorder, all of whom with HAM-D scores between 25 and 30. And suppose they are divided into two groups of 500, matched for gender, demographics, severity, chronicity, etc. Then suppose one group is given a treatment such as psychotherapy or a medication, and the other group is given a placebo treatment. This could continue for 3 months, then the groups could be switched, so that every person in the study would at some point receive the active treatment and at another point the placebo.
This is a typical design for treatment studies, and I think it is very strong. If the result of the study is positive, this is very clear evidence that the active treatment is useful.
But suppose the result of the study is negative. What could this mean? Most of us would conclude that the active treatment is therefore not useful. --But I believe this is an incorrect conclusion!--
Suppose, yet again, that this is a study of people complaining of severe headaches, carefully controlled for matching severity and chronicity, etc. And suppose the treatment offered was neurosurgery or placebo. I think that the results-- carefully summarized by a statistical statement--would show that neurosurgery does not exceed placebo (in fact, I'll bet the neurosurgery group would do a lot worse!) for treatment of headache.
Yet -- in this group of 1000 people, it is possible that 1 or 2 of these headache sufferers was having a headache due to a surgically curable brain tumor, or a hematoma. These 1 or 2 patients would have a high chance of being cured by a surgical procedure, and some other therapy effective for most other headache sufferers (e.g. a tryptan for migraine, or an analgesic, or relaxation exercises, etc.) would have either no effect or would have a spurious benefit (relaxation might make the headache pain from a tumor temporarily better -- and ironically would delay a definitive cure!)
Likewise, in a psychiatric treatment study, it may be possible that subtypes exist (perhaps based on genotype or some other factor currently not well understood), which respond very well to specific therapies, despite the majority of people in the group sharing similar symptoms not responding well to these same therapies. For example, some individual depressed patients may have a unique characteristic (either biologically or psychologically) which might make them respond to a treatment that would have no useful effect for the majority.
With the most common statistical analyses done and presented in psychiatric and other medical research studies, there would usually be no way to detect this phenomenon: negative studies would influence practitioners to abandon the treatment strategy for the whole group.
How can this be remedied? I think the simplest method would be trivial: all research studies should include in the publication every single piece of data gathered! If there is a cohort of 1000 people, there should be a chart or a graph showing the symptom changes over time of every single individual. There would be a messy graph with 1000 lines on it (which is a reason this is not done, of course!) but there would be much less risk that an interesting outlier would be missed! If most of the thousand individuals had no change in symptoms, there would be a huge mass of flat lines across the middle of the chart. But if a few individuals had a total, remarkable cure of symptoms, these individuals would stand out prominently on such a chart. Ironically, in order to detect such phenomena, we would have to temporarily leave aside the statistical tools which we had intended to use, and "eyeball" the data. So intuition could still have a very important role to play in statistics & research!
After "eyeballing" the complete setof data from every individual, I do agree that this would have to lead to another formal hypothesis, which would subsequently have to be tested using a different study design, designed specifically to pick up such outliers, then a formal statistical calculation procedure would have to be used to evaluate whether the treatment would be effective for this group. (e.g. the tiny group of headache sufferers specifically with a mass evident on a CT brain scan could enter a neurosurgery treatment study, to clearly show whether the surgery is better than placebo for this group).
I suspect that in many psychiatric conditions, there are subtypes not currently known about or well-characterized by DSM categorization. Genome studies should be an interesting area in the future decades, to further subcategorize patients sharing identical symptoms, but who might respond very differently to specific treatment strategies.
In the meantime, though, I think it is important to recognize that a negative study, even if done with very good study design and statistical analysis, does not prove that the treatment in question is ineffective for EVERYONE with a particular symptom cluster. There might possibly be individuals who would respond well to such a treatment. We could know this possibility better if the COMPLETE set of data results for each individual patient were published with all research studies.
Another complaint I have about the statistics & research culture has to do with the term "significant." I believe that "significance" is a construct that contradicts the whole point of doing a careful statistical analysis, because it requires pronouncing some particular probability range "significant" and others "insignificant." Often, a p-value less than 0.05 is considered "significant." The trouble with this is that the p-value speaks for itself; it does not require a human interpretive threshold to call something "significant" or not. I believe that studies should simply report the p-value, without labelling the results "significant" or not. This way, two studies which yield p-values of 0.04 and 0.07 could be seen to show much more similar results than if you called the first study "significant" and the second "insignificant." There may be some instances in which a p-value less than 0.25 could still usefully guide a long-shot trial of therapy -- this p-value would be very useful to know exactly, rather than simply reading that the result was "very insignificant." Similarly, other types of treatment decisions might demand that the p-value be less than 0.0001 before they could safely be made. Having a research culture in which p<0.05="significant" dilutes the power and meaning of the analysis, in my opinion, and arbitrarily introduces a type of cultural judgment which is out of place for careful scientists.
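To put numbers on this, here is a small sketch using the standard normal approximation (the two z-statistics are hypothetical values I chose for illustration). It shows that p = 0.04 and p = 0.07 arise from nearly identical test statistics, which is exactly why the "significant"/"insignificant" dichotomy obscures more than it reveals:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    return math.erfc(abs(z) / math.sqrt(2))

# Two hypothetical studies with very similar evidence against the null:
p1 = two_sided_p(2.05)   # lands just under the 0.05 threshold
p2 = two_sided_p(1.81)   # lands just over it
print(round(p1, 3), round(p2, 3))
```

The underlying z-statistics (2.05 vs 1.81) differ by only about 12%, yet the conventional labelling would describe one result as "significant" and the other as "insignificant" -- the exact p-values make the similarity plain.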
Tuesday, February 7, 2012
How long does it take for psychotherapy to work?
Various past research articles describe rates of change in psychotherapy patients, with some studies, for example, describing a plateau after about 25 sessions. I find these studies very weak, because of the multitude of confounding factors: severity and chronicity are obvious variables, as is the type of follow-up assessment done.
In the CBT literature, a typical trial of therapy is perhaps 16-20 sessions.
In light of our evolving knowledge of neuroplasticity, and our breadth of understanding about education & learning, it seems to me that the most important variable of all is the amount of focused, deliberate practice time spent in a therapeutic activity. Oddly, most psychotherapy studies--even CBT studies--do not look at how many hours of practice patients have done in-between therapy appointments. This would be like looking at the progress of music students based on how many lessons they get, without taking into account how much they practice during the week.
I have often compared psychological symptom change to the changes which occur, for example, with language learning or with learning a musical instrument.
So, I believe that a reasonable estimate of the amount of time required in psychotherapy depends on what one is trying to accomplish:
-Some types of therapeutic problems might be resolved with a few hours of work, or with a single feedback session with a therapist. This would be akin to a musician with some kind of technical problem who needs just some clear instruction about a few techniques or exercises to practice. Or it might be akin to a person who is already fluent in a foreign language, but needs a few tips from a local speaker about idioms, or perhaps some help with editing or grammar in a written text.
-Many more therapeutic problems could improve with perhaps 100 hours of work. This would be like learning to swim or skate competently if you have never done these activities before. Regular lessons ("therapy") would most likely speed up your rate of progress substantially. But most of those 100 hours would be practice on your own, unless you're okay with the progress taking place over a year or more. With the language analogy, think of how fluent you might become in a foreign language with 100 hours of focused, deliberate practice. For most of us, this would lead to an ability to have a very simple conversational exchange, perhaps to get around in the most basic way in another country.
-A much larger change is possible with 1000 hours of work: with music, one could become quite fluent but probably not an expert. With a foreign language, comfortable fluency would probably be possible, though probably still with an accent and a preference for the old language.
-With 5000-10000 hours of work (this is several hours per day over a decade or more) one could become an expert at a skill or a language in most cases.
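The tiers above can be turned into a rough calendar estimate. In this sketch, the tier labels, the 7500-hour midpoint for mastery, and the one-hour-per-day assumption are my own illustrative choices:

```python
# Rough calendar time implied by each practice tier, assuming a fixed
# amount of daily practice (one hour per day here; adjust as needed).
tiers = {"quick tune-up": 5, "basic competence": 100,
         "fluency": 1000, "mastery": 7500}
hours_per_day = 1.0

for goal, hours in tiers.items():
    years = hours / hours_per_day / 365
    print(f"{goal}: {hours} h -> about {years:.1f} years at {hours_per_day:g} h/day")
```

At one hour per day, fluency takes nearly three years and mastery over two decades -- which is why most of the practice has to happen outside the weekly therapy hour if meaningful change is to arrive sooner.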
In psychotherapy, another confound is whether the times in-between "practice sessions" lead to a regression of learning. An educational analogy would be practicing math exercises an hour per day with a good teacher, but then practicing another 8 hours a day with a second teacher whose methods contradict the first's. Often, learning will still take place under this paradigm, but it might be much less efficient. Persistent mental habits, in the context of mental illnesses, can be akin to the "second teacher" in this metaphor, and unfortunately they do tend to plague people for many hours per day.
This reminds me of the evolving evidence about stroke rehabilitation & neuroplasticity: substantial brain change can happen in as short a time as 16 days--but it requires very strict inhibition or constraint of the pathways which obstruct rehabilitation. (note: 16 days of continuous "immersion" = 16*24 = 384 hours!) In stroke rehabilitation, the neuroplasticity effect is much more pronounced if the unaffected limb is restrained, compelling the brain to optimize improvement in function of the afflicted limb. Here is a recent reference showing rapid brain changes following limb immobilization: http://www.ncbi.nlm.nih.gov/pubmed/22249495
In conclusion, I believe that it is important to have a clear idea about how much time and deliberate, focused effort are needed to change psychological symptoms or problems through therapeutic activities. A little bit of meaningful change could happen with just a few hours of work. In most cases, 100 hours is needed simply to get started with a new skill. 1000 hours is needed to become fluent. And 5000-10000 hours is needed to master something. These times would be much longer still if the periods between practice sessions are regressive. In the case of addictions, eating disorders, self-harm, or OCD, for example, relapses or even fantasies about relapse will substantially prolong the time it takes for any therapeutic effort to help. Of course, it is the nature of these problems to have relapses, or fantasies about relapse--so one should let go of the temptation to feel guilty if there are relapses. But if one is struggling with an addictive problem of this sort, it may help to remind oneself that the brain can change very substantially if one can hold on to quite a strict behavioural pattern for the hundreds or thousands of hours which are needed.
As a visual reminder of this process, start with an empty transparent bottle, which can hold 250-500 mL of liquid (1-2 cups), and which can be tightly sealed with a small cap. Add one drop of water every time you invest one hour of focused, deliberate therapeutic work. The amount of time you need to spend in therapy depends on your goal. If the goal is total mastery--then you must fill the entire bottle. If simple competence in a new skill is an adequate goal, then you must fill just the cap of the bottle. If there are activities in your day which contradict the therapeutic work, it would be like a little bit of water leaking out of your bottle. So you must also attend to repairing any "leaks." But every hour of your effort counts towards your growth.
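The metaphor's arithmetic works out surprisingly well. Assuming a standard pharmacy drop of about 20 drops per mL and a cap holding roughly 5 mL (both assumptions of mine, not stated above), the drop counts line up with the practice tiers:

```python
# One drop per hour of therapeutic work; check how many hours
# the bottle and the cap each represent (drop size is an assumption).
DROPS_PER_ML = 20          # ~0.05 mL per drop, a common pharmacy convention
bottle_ml = 250            # the smaller bottle size mentioned above
cap_ml = 5                 # assumed volume of the cap

print("hours to fill the bottle:", bottle_ml * DROPS_PER_ML)  # 5000
print("hours to fill the cap:", cap_ml * DROPS_PER_ML)        # 100
```

So the full bottle corresponds to the 5000-10000 hours of mastery, and the cap to the roughly 100 hours of basic competence.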