Here is a link to the abstract of an interesting article by Fowler & Christakis, published in the British Medical Journal in December 2008:
http://www.ncbi.nlm.nih.gov/pubmed/19056788
I think it is a delightful statistical analysis of social networks, based on a cohort of about 5000 people from the Framingham Heart Study, followed over 20 years. This article should really be read in its entirety, in order to appreciate the sophistication of the techniques.
They showed that happiness "spreads" in a manner analogous to contagion. Having happy same-sex friends or neighbours who live nearby increases one's likelihood of being, or becoming, happy. Interestingly, spouses and coworkers did not have a pronounced effect.
Also, the findings show that having "unhappy" friends does not cause a similar increase in likelihood of being or becoming "unhappy" -- it is happiness, not unhappiness, in the social network, which appears to "spread."
So the message here is not that people should avoid unhappy friends; in fact, befriending an unhappy person can be helpful not only to that individual, but also to his or her wider social network.
There has been some criticism of the authors' techniques, but overall I find the analysis to be very thorough, imaginative, and fascinating.
Here are some practical applications suggested by these findings:
1) sharing positive emotions can have a substantial, lasting positive emotional impact on people near you, including friends and neighbours.
2) nurturing friendships with happier people who live close to you may help to improve subjective happiness.
3) this does not mean that friendships with unhappy people have a negative emotional impact, unless all of your friendships are with unhappy people.
4) in the treatment of depression, consideration of the health of social networks can be very important. Here, the "quantity" of the extended social network is not relevant (so the number of "Facebook friends" doesn't matter). Rather, the relevant effects are due to the characteristics of the close social network of 2-6 people or so, particularly those in close geographic proximity. As I look at the data, I see that having two "happy friends" has a significantly larger positive effect than having only one, but there was not much further effect from having more than two.
5) I have to wonder whether the value of group therapy for depression is diminished if all members of the group are severely depressed. I could see group therapy being much more effective if some of the members were in a recovered, or recovering, state. This reminds me of some of the research about social learning theory (see my previous post: http://garthkroeker.blogspot.com/2008/12/social-learning-therapy.html)
6) on a public health level, the expense involved in treating individual cases of depression should be weighed not only against that individual's improved health, function, and well-being, but also against that individual's positive health impact on his or her social network.
7) There is individual variability in social extroversion, or social need. Some individuals prefer a very active social life; others prefer relative social isolation; still others desire social activity but are isolated or socially anxious. Those who live in relative social isolation might still have a positive reciprocal experience of this social network effect, provided that relationships with people living nearby (such as next-door neighbours or family) are positive.
I should conclude that, despite the strength of the authors' analysis of a very large epidemiological cohort, my inferences and proposed applications above could only really be proven definitively through randomized prospective studies. Yet such studies would be virtually impossible to do! Some of the social psychology literature attempts to address this, but manages to do so only in a more limited, cross-sectional manner.
Tuesday, October 27, 2009
Positive Psychology (continued)
This is a response to a reader's comment on my post about positive psychology:
http://garthkroeker.blogspot.com/2009/10/positive-psychotherapy-ppt-for.html
Here's a brief response to some of your points:
1) I don't think there's anything wrong with focusing on pathology or weaknesses. In fact, I consider this type of focus to be essential. Imagine an engineering project in which structural weaknesses or failures were ignored, with a great big smile or a belief that "everything will be fine." Many a disaster has resulted from this kind of approach. I think of the space shuttle disaster, for example.
The insight from positive psychology, though, in my opinion, has to do with re-evaluating the balance between a focus on "positivity" and a focus on pathology.
In depressive states, the cognitive stance is often overwhelmingly critical, about self, world, and future. Even if these views are accurate, they tend to prevent any solution of the problem they describe. It is like an engineering project where the supervisor is so focused on mistakes and criticism that no one can move on, all the workers are tired and demoralized, and perhaps the immediate, relentless focus on errors prevents a different perspective, and a healthy collaboration, which might actually definitively solve the problem.
2) I believe that pronouncements of the "right or wrong" of an emotional or intellectual position are finally up to the individual. It is not for me, or our culture, to judge. There will be all sorts of points of view about the morality or acceptability of any emotional or social stance: some of these points of view will be very critical or judgmental to a given person, some won't. I suppose there are elements of the culture that would harshly judge or criticize someone who appears too "happy": perhaps such a person would be deemed shallow, delusional, uncritical, vain, etc. I prefer to view ideas such as those in "positive psychology" as possible instruments of change, to be tried if a person wishes to try them. CBT, medications, psychoanalysis, surgery, having "negative friends" or "ditching them", etc. are all choices, change behaviours, or ways of managing life, which I think individuals should be free to consider if available, and if legal, but also free to reject if they feel it is not right for them.
In terms of the "gimmicky" nature of positive psychology, I agree. But I think most of the ideas are very simple, and are reflected in other very basic, widely accepted research in biology & behaviour. In widely disparate fields, such as the study of child-rearing, education, coaching, or animal training, it is clear that recognition and criticism of "faults" or "pathologies" is necessary in order for problems to be resolved. Yet the mechanism by which change most optimally occurs is by instilling an atmosphere of warmth, reward, comfort, and joy, with a minority of feedback having to do with criticism. The natural instinct with problematic situations, however, is often to punish. Punishing a child for misbehaviour may at times be necessary, but most times child punishments are excessive and ineffectual, often are more about the emotional state of the punisher rather than the behavioural state of the child, and ironically may reinforce the problems the child is being punished for. Punishing a biting dog through physical injury will teach the dog to be even more aggressive. I find this type of cycle prominent in depressive states: there may be a lot of internal self-criticism (some of which may be accurate), but it leads to harsh self-punishment which ends up perpetuating the depressive state. I find the best insights of "positive psychology" have to do with stepping out of this type of punitive cycle, not by ignoring the negative, but by deliberately trying to nurture and reward the positive as well.
3) The research about so-called "depressive realism" has always seemed quite suspect to me. In a person with PTSD (a disorder which I consider highly analogous to depression and other mental illnesses), there is very often a heightened sensitivity to various stimuli, which may, for example, make that person more vigilant about the potential dangers associated with the sound of footsteps in the distance, or the smell of smoke. Oftentimes, though, this heightened vigilance comes at great expense to that person's ability to function in life: a pleasant walk, a work environment, or a hug may instead become a terrifying journey or a place of constant fear of attack.
Similarly, in depressive states, there may be beliefs that are, on one level, accurate, but on another level are causing a profound impairment in life function (e.g. regarding socializing, learning, work, simple life pleasures, spirituality, etc.).
With regard to science, I do not find any need to say that "positive psychology" etc. is about a biased interpretation of data. Instead, my analogy would be along the lines of how one would solve a complex mathematical equation:
-only a small minority of mathematical problems have a straightforward, exact answer. If one were to look only at precedents in data, one might conclude that there is no definable answer for many problems. A cynical and depressive approach would be to abandon the problem.
-but most complex problems today require what is called a "numerical analysis" approach. This basically necessitates guessing at the solution, then applying an algorithm that will "sculpt" the guess closer to the true answer. Sometimes the algorithm doesn't work, and the attempted solutions "diverge." But convergence to a solution through numerical methods is one of the most powerful tools in modern science, and it has permitted many of the major advances in science and engineering over the past hundred years (a small illustrative sketch follows below). It is basically analogous to positive behavioural shaping in psychology. It is not about biased interpretation of data; it is about using a set of "positive" tools to solve a problem (in the mathematical case, to get numerical solutions; in the psychological case, to relieve symptoms, to increase freedom of choice, and to expand the realm of possible life functions available).
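To make the analogy concrete, here is a minimal, purely illustrative sketch of a numerical-analysis method -- Newton's method, written in Python (my own example, not taken from any of the work discussed here) -- in which an initial guess is repeatedly "sculpted" toward the true answer:

# Illustrative sketch only: Newton's method, a classic numerical-analysis algorithm.
# Start from a guess, then let the algorithm nudge the guess closer to the true answer.
def newton(f, df, guess, steps=50, tol=1e-10):
    x = guess
    for _ in range(steps):
        step = f(x) / df(x)   # roughly, how far the current guess misses
        x = x - step          # "sculpt" the guess toward the solution
        if abs(step) < tol:   # successive guesses barely change: converged
            return x
    return x                  # a poor starting guess may fail to converge ("diverge")

# Example: find the square root of 2 by solving x*x - 2 = 0, starting from a rough guess of 1.0
print(newton(lambda x: x * x - 2, lambda x: 2 * x, guess=1.0))  # prints ~1.4142135623730951

The point of the analogy is only that the method begins with an imperfect guess and makes progress through repeated small corrections, rather than demanding a perfect answer at the outset.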
4) Some of the experiments are weak, no doubt about that. I don't consider experiments evaluating superficial cross-sectional affect to be relevant to therapy research. Experiments which evaluate change in symptoms and subjective quality-of-life measures over long periods of time are most relevant to me. I consider "positive psychology" to be just one more set of ideas that may help to improve quality of life, and overall life function, as subjectively defined by a patient.
In my discussion of this subject, I do not mean to suggest that so-called "positive psychology" is my favoured therapeutic system. Some of the ideas may be quite off-putting to individuals who may need to deal with a lot of negative symptoms directly before doing "positivity exercises." But I do think that some of the ideas from positive psychology are important and relevant, and deserve to be adopted as part of an eclectic therapy model.
Wednesday, October 21, 2009
Internet, Video Games, and TV: Addictions or Cognitive Enhancers?
I'll introduce this post with my opinion on this issue:
Almost any human activity can be addictive, in a harmful way. That is, the activity could provide a mental reward which leads to the following pattern:
- the activity happens more frequently
- tolerance develops
- increased absorption with the activity develops, in order to achieve the same or greater reward
- other activities feel more boring or unrewarding
- other activities & relationships are neglected
- physical harm may result from sleep deprivation, sedentary behaviour, repetitive strain, reduced self-care, etc.
- social harm may result from relationship neglect or isolation, but also from associating with a cohort of fellow "addicts" who do the same behaviours
- the "mental reward" could probably correlate with functional brain imaging demonstrating increased activity of central dopaminergic reward circuits
Many "good" activities could lead to an addictive pattern. Here's a list of possible activities that can potentially become addictive in this sense:
1) work
2) earning money
3) studying
4) hobbies
5) house chores
6) talking or texting on phones or other electronic devices
7) being in the company of people, or of a particular person
8) sports (playing or watching)
9) reading
10) pursuing excellence
Sometimes, behaviours or thoughts associated with depression or low self-esteem can be "addictive", in that some people may feel a type of masochistic reward from them.
Individuals may not recognize the unhealthy or addictive components of their behaviours. For a person wanting to earn more money, or pursue more excellence, it may seem absurd, and contrary to that person's values, to consider backing away from these pursuits.
For the person "pursuing excellence," it may be true that pouring more time and energy into training might increase achievement in a short-term sense. But this is the addictive trap. In order to pursue excellence in the most effective way, a balanced lifestyle is necessary. In order to achieve that balanced lifestyle, that person may paradoxically need to back away from their immediate pursuit.
I think that all types of modern technology have the potential to be addictive.
Technology and technological culture are changing at an unprecedented pace. And the technologies have ever more powerful and subtle ways to capture our interest, attention, and to stimulate neural reward.
All technological inventions have become addictive for some people. Yet most of these inventions have also contributed to an evolution of modern culture, which has been positive in many ways.
The internet, TV, and video games can all be stimulating, educational activities which could enhance brain function and intelligence, and could lead to improved social relationships. They could be devices which improve relatedness rather than foster alienation.
Some of these technologies may permit an individual with problems such as a social skills difficulty to explore social connectedness in a different way. In this way, the internet can be an expansion of human connectedness and community. It is a technology which continues the trend of increased potential connectedness through human history. Thousands of years ago, it would have been hard to meet anyone who lived any farther away than the next village. While many individuals would have thrived socially in isolated village culture, some individuals would have been alienated.
Yet technological devices can be easily addictive. And the huge availability of choice in modern technology may permit an individual to find a particular thing that absorbs attention, and to disappear into that activity while general physical, social, and mental health deteriorates. There is also a lot of choice available that has violent content, or which creates only an illusion of connection, while none really exists. Facebook or other social connection applications can become preoccupations for many people. While such sites could facilitate social connection, they could also be such a preoccupation that actual social relationships are neglected. The "network" itself could become a meaningless collection of distant acquaintances, yet the preoccupied individual may believe that expanding the network further is a valid solution to this problem. This is not unlike various neurotic social behaviours that exist outside of modern technology: people have always had collections of social behaviours which they believed to be useful, but which in fact caused increased social distance & loneliness (e.g. vain behaviours, talking a lot without listening, etc.).
The thing that I believe distinguishes addictions to modern technology from other types of addiction is that many individuals are unquestioningly adopting the technologies as major parts of their daily lives, without being aware of the addictive potential, and without maintaining balance in other parts of life. While everything in life can be addictive, we have a greater understanding of non-technological addiction, since these phenomena have developed more slowly over past decades or centuries. New technology is changing personal culture so rapidly that we may have little chance to understand the risks before the addictiveness is quite entrenched in many people.
So, in conclusion, I do not believe that modern technology, including the internet, TV, or video games, is necessarily "bad." These technologies may in fact be wonderful, life-enhancing joys which improve happiness, culture, relationships, and connectedness. Yet they have a high risk of being addictive. I do not believe most people understand the degree of risk involved. I encourage people, in the meantime, to choose wisely when using technology, or when doing supposedly "good" activities such as those listed above, perhaps using the following questions:
1) am I doing this just out of habit, because of boredom, or as part of procrastinating?
2) is this activity enhancing my life, or is it just gobbling up some of my time and attention?
3) is this activity improving my community, or is it distracting energy away from healthy community?
4) is this activity causing me physical harm, due to lack of exercise, or physical overuse?
5) is this activity consistent with my core values?
6) if it is consistent, is it really helping realize those core values?
7) is the activity itself causing my core values to change in an unwelcome way?
8) is the activity distracting energy or time away from other activities (such as learning, developing a talent, practicing a creative art, developing social relationships) which are important to personal culture?
9) do I have boundaries around this activity, in terms of time & energy, that protect my health?
References & Further Reading:
http://www.ncbi.nlm.nih.gov/pubmed/19818048
{this is a 2009 study by Kira Bailey et al., giving a good review of data concerning video gaming & cognitive variables; they discuss their own study, which leads to the following conclusion:
"these data may indicate that the video game experience is associated with a decrease in the efficiency of proactive cognitive control that supports one’s ability to maintain goal-directed action when the environment is not intrinsically engaging." In other words, video gaming may lead to an ADHD-like phenomenon}
http://www.ncbi.nlm.nih.gov/pubmed/18506602
{a useful review of the subject of technological advancements, in this case specifically regarding gambling technology, looking at whether these advancements constitute increased addictive risk, and if technology to reduce addictive risk is effective. The promise is that the technology itself could evolve--if it is the will of individuals and manufacturers to permit this evolution--to become safer, healthier, and less prone to foster addictive behaviour}
http://www.ncbi.nlm.nih.gov/pubmed/19805713
{this 2-year prospective study of adolescents shows that ADHD, depression, social phobia & hostility symptoms are risk factors for developing internet addiction}
http://www.ncbi.nlm.nih.gov/pubmed/19701792
{one of many associational studies correlating negative mood & internet/gaming addiction; unfortunately, associational studies are very weak, and do not really answer the question of how internet use or gaming affects people, since we cannot see the direction or strength of causation}
http://www.ncbi.nlm.nih.gov/pubmed/19490510
{a study showing a strong association between addictive internet use and excessive daytime sleepiness}
http://www.ncbi.nlm.nih.gov/pubmed/16634979
{a study associating TV & computer use with sedentary behavior in 5-year-olds}
http://www.ncbi.nlm.nih.gov/pubmed/19428410
{one of the studies showing enhanced visual attentional skills in video gamers. But I find this a severely limited study which should not be over-interpreted--basically it shows that if you play video games, you become more skilled at a visual attention test that resembles the video games you've been playing. It says nothing about general intelligence, social skills, verbal aptitude, etc. which may well have atrophied in the video gamers}
http://www.ncbi.nlm.nih.gov/pubmed/18929349
{a more extensive analysis of cognitive skills in relation to video game playing. But, astonishingly, no cognitive tests were given to assess verbal skills, social skills, etc.; rather the tests were all related to things that seemed to me quite similar to video game tasks--so it is no surprise that the video gamers performed modestly better on some of these! No surprise that playing 1000 hours of Tetris probably will help you mentally rotate 3-d shapes more easily! But at what cost to other social, emotional, and intellectual skills? We need to have prospective studies that do very broad cognitive and psychological evaluations following prolonged exposure to different types of video games. The evaluations must include assessments of emotional state, verbal & non-verbal attention, memory, and reasoning; and they should include assessments of "social intelligence" such as establishing appropriate social communication, empathy, recognition of emotions, etc.}
http://www.ncbi.nlm.nih.gov/pubmed/19016226
{a 30-month longitudinal study showing increased aggression and hostile attribution bias in those exposed to violent video games}
http://www.ncbi.nlm.nih.gov/pubmed/19127289
{here's a description of an interesting psychotherapeutic application for a video game: in this study, those who played Tetris after watching a disturbing film had fewer flashback symptoms afterwards; it may encourage a tactic of treating those who have recently experienced a traumatic event with cognitive distraction, in order to reduce involuntary intrusive emotional memory of the trauma, and therefore to reduce the chance of developing PTSD. The deliberate, voluntary memory of the traumatic scene was unaffected.}
http://www.ncbi.nlm.nih.gov/pubmed/16972829
{an example of using video games to reduce pre-operative anxiety in young children. This sounds like a great idea, which could improve comfort while minimizing medication use in this type of situation.}
http://www.liebertpub.com/products/product.aspx?pid=10
{this is a link to a fairly new journal called "CyberPsychology & Behavior", which looks interesting and pertinent}
Tuesday, October 20, 2009
"Positive Psychotherapy" (PPT) for depression
This post is a continuation of my earlier post on the psychology of happiness. I'm trying to look at each of the references in more detail.
PPT (positive psychotherapy) is a technique described in a paper by Seligman et al. Here's a reference, from American Psychologist in 2006:
http://www.ncbi.nlm.nih.gov/pubmed/17115810
In this paper the technique was tested on two groups. The more important finding concerns the application of PPT with severely depressed adults: PPT was compared with "treatment as usual" (mainly supportive therapy) and with "treatment as usual plus antidepressant". The trial lasted 12 weeks, and there was follow-up over 1 year.
The PPT group showed significant improvement in depression scores, and significantly increased happiness, compared to the two control groups.
More controlled studies need to be done on the technique, but in the meantime, the ideas are simple, valuable, potentially enjoyable, and easily incorporated into other therapy styles such as CBT. Here are some of the exercises recommended in PPT, as described in the paper mentioned above:
1) Write a 300-word positive autobiographical introduction, which includes a concrete story illustrating character strengths
2) Identify "signature strengths" based on exercise (1), and discuss situations in which these have helped. Consider ways to use these strengths more in daily life
3) Write a journal describing 3 good things (large or small) that happen each day
4) Describe 3 bad memories, associated anger, and their impact on maintaining depression (this exercise to be done just once or a few times, not every day)
5) Write a letter of forgiveness describing a transgression from the past, with a pledge to forgive (the letter need not be actually sent)
6) Write a letter of gratitude to someone who was never properly thanked
7) Avoid an attitude of "maximizing" as a goal; focus instead on meaningfully engaging with what is enough (i.e. avoiding addictive hedonism, in terms of materialism or achievement). The authors use the term "satisficing", which led me to look this word up--here's a good article I found: http://en.wikipedia.org/wiki/Satisficing. I think this idea is really important for those of us who are very perfectionistic, or who have very specific, fixed standards for the way we believe life should be, and who therefore feel that real life is always lagging behind these expectations or requirements, or that real life could at any moment crash into a state of failure.
8) Identification of 3 negative life events ("doors closed") which led to 3 positives ("doors opened").
9) Identification of the "signature strengths" of a significant other.
10) Give enthusiastic positive feedback to positive events reported by others, at least once per day
11) Arrange a date to celebrate the strengths of oneself and of a significant other
12) Analyze "signature strengths" among family members
13) Plan and engage with a "savoring" activity, in which something pleasurable is done, with conscious attention given to how pleasurable it is, and with plenty of time reserved to do it
14) "Giving a gift of time" by contributing to another person, or to the community, a substantial amount of time, using one of your signature strengths. This could include volunteering.
Here's a link to a blog devoted to positive psychology techniques:
http://blog.happier.com/
This blog is connected to a site where you are asked to sign up and pay for a membership. I'm always a bit jarred when an altruistic psychotherapeutic system is marketed for financial profit. Would it not be more satisfying to everyone to offer this for free? Also, I think the photograph of an ecstatic woman in a flowery meadow is a bit over-the-top as advertising for the site. I find the marketing excessively aggressive; it looks like an infomercial. Some of this stuff could really be off-putting to weary, understandably cynical individuals with chronic depression who have tried many other types of therapy already. And there can be a sort of religious fervor among enthusiastic adherents of a new technique, which can skew reason.
Yet, these ideas are worth looking at. And I certainly agree that in psychiatry, and in therapy, we often focus excessively on the negative side of things, and do not attend enough to nurturing the positive.
Mindfulness actually works
So-called "mindfulness" techniques have been recommended in the treatment of a variety of problems, including chronic physical pain, emotional lability, anxiety, borderline personality symptoms, etc.
I do not think mindfulness training is a complete answer to any of these complex problems, but it could be an extremely valuable, essential component in therapy and growth.
I think now of a metaphor of a growing seedling, or a baby bird: these creatures require stable environments in order to grow. Internal and external environments may not always be stable, though. This instability may be caused by many internal and external biological, environmental, social, or psychological factors. In an unstable environment, growth cannot occur--it gets disrupted, uprooted, or drowned, over and over again, by painful waves of symptoms. Mindfulness techniques can be a way to deal with this type of pain, by taking away from the pain its power to disrupt, uproot, or drown. In itself it may not lead to psychological health, but it may permit a stable ground on which to start growing and building health.
Mindfulness on its own may not always stop pain, but it may lay the groundwork for an environment in which the causes of the pain may finally be dealt with and relieved. In this way mindfulness can be more a catalyst for change than a force of change.
Here is some research evidence:
http://www.ncbi.nlm.nih.gov/pubmed/1609875
http://www.ncbi.nlm.nih.gov/pubmed/7649463
These are links to two of Kabat-Zinn's papers: the first describes the results of an 8-week mindfulness meditation course on anxiety symptoms in a cohort of 22 patients, and the second describes a 3-year follow-up on these same patients. The results show persistent, substantial reductions in all anxiety symptoms. The studies are weakened by the lack of placebo groups and randomization. But the initial cohort had quite chronic and severe anxiety symptoms (of average duration 6.8 years), and symptom scores declined by about 50%, which is very significant for chronic anxiety disorder patients and represents a radical improvement in quality of life.
These papers suggest that mindfulness does not merely "increase acceptance of pain"--they suggest that mindfulness also leads to direct reduction of symptoms.
http://www.ncbi.nlm.nih.gov/pubmed/3897551
This is a link to one of Kabat-Zinn's original papers showing substantial symptom improvement and quality-of-life improvement in 90 chronic pain patients who did a 10-week mindfulness meditation course.
http://www.ncbi.nlm.nih.gov/pubmed/15256293
This is a 2004 meta-analysis concluding that mindfulness training, for a variety of different syndromes of emotional or physical pain, has an average effect size of about 0.5--a moderate effect, which suggests a clinically meaningful benefit. It does come from a potentially biased source, "the Freiburg Institute for Mindfulness Research," but the study itself appears to be well put-together.
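For readers unfamiliar with the statistic (and assuming the familiar Cohen's d, the usual measure in such meta-analyses): effect size = (mean outcome of the treated group - mean outcome of the control group) / pooled standard deviation. An effect size of about 0.5 therefore means the average treated patient improved by roughly half a standard deviation more than the average control patient, which by convention counts as a "medium" effect.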
http://www.ncbi.nlm.nih.gov/pubmed/17544212
This randomized, controlled 8-week study showed slight improvements in various symptoms among elderly subjects with chronic low back pain. Pain scores (i.e. quantified measures of subjective pain) did not actually change significantly, and quality of life scores didn't change very much either. So I think the results of this study should not be overstated.
I do think that 8 weeks is too short. Also, the degree of "immersion" for a technique like this is likely to be an extremely important factor: I think 8 weeks of 6 hours per day would be much more effective, or a 1-year study of 1 hour per day. Techniques such as meditation are similar to learning languages or musical skills, and these types of abilities require much more lengthy, immersive practice in order to develop.
In the meantime, I encourage people to inform themselves about mindfulness techniques, and consider reserving some time to develop mindfulness skills.
Labels: Anxiety, Depression, Metaphors, Personality Disorders
Monday, October 19, 2009
The Importance of Two-Sided Arguments
This is a topic I have been meaning to write a post about for some time. I encountered it while doing some social psychology reading last year, and it connects with a lot of other posts I've written having to do with decision-making and persuasion. It also touches on the huge issue of bias which appears in so much of the medical and health literature.
Here is what some of the social psychology research has to say on this:
1) If someone already agrees on an issue, then a one-sided appeal is most effective. So, for example, if I happen to recommend a particular brand of toothpaste, or a particular political candidate, and I simply give a list of reasons why my particular recommendation is best, then I am usually "preaching to the converted." Perhaps more people will go out to buy that toothpaste brand, or vote for that candidate, but they would mostly be people who would have made those choices anyway. The only others who would be most persuaded by my advice would be those who do not have a strong personal investment or attachment to the issue.
2) If people are already aware of opposing arguments, a two-sided presentation is more persuasive, and its effects are more enduring. And if people disagree with a certain position, a two-sided presentation is more persuasive in changing their minds. People are likely to dismiss as biased a one-sided presentation which disagrees with their point of view, even if the presentation contains accurate and well-organized information. This is one of my complaints about various types of media and documentary styles: sometimes there is an overt left-wing or right-wing political bias that is immediately apparent, particularly to a person holding the opposing stance. I can think of numerous examples in local and international newspapers and television. The information from such media or documentary presentations would therefore have little educational or persuasive impact except with individuals who probably agree with the information and the point of view in advance. The strongest documentary or journalistic style is one which presents both sides of a debate; otherwise it is probably almost worthless for effecting meaningful change--in fact it could entrench the points of view of opposing camps.
It has also been found that if people are already committed to a certain belief or position, then a mild attack or challenge to this position causes them to strengthen their initial stance. Ineffective persuasion may "inoculate" people attitudinally, causing them to be more committed to their initial positions. In an educational sense, children could be "inoculated" against negative persuasion, such as from television ads or peer pressure to smoke, by exploring, analyzing, and discussing such persuasive tactics with parents or teachers.
However, such "inoculation" may be an instrument of attitudinal entrenchment and stubbornness: a person who has anticipated arguments against his or her committed position is more likely to hold that position more tenaciously. Or an individual who has been taught a delusional belief system may have been taught the various challenges to the belief system to expect: this may "inoculate" the person against challenging this belief system, and cause the delusions to become more entrenched.
An adversarial justice system resembles, to some degree, an efficient process--from a psychological point of view--for seeking the least biased truth. However, the problem here is that both sides "inoculate" themselves against the evidence presented by the other. The opposing camps do not seek "resolution"--they seek to win, which is quite different. Also, the prosecution and the defense do not EACH present a balanced analysis of the pros and cons of their cases. Information may be withheld--the defense may truly know the guilt of the accused, yet this may not be shared openly in court. Presumably the prosecution would not prosecute if the innocence of the accused were known for certain.
Here are some applications of these ideas, which I think are relevant in psychiatry:
1) Depression, anxiety, and other types of mental illness, tend to feature entrenched thinking. Thoughts which are very negative, hostile, or pessimistic--about self, world, or future--may have been consolidated over a period of years or decades, often reinforced by negative experiences. In this setting, one-sided optimistic advice--even if accurate-- could be very counterproductive. It could further entrench the depressive cognitive stance. Standard "Burns style" cognitive therapy can also be excessively "rosy", in my opinion, and may be very ineffective for similar reasons. I think of the smiling picture of the author on the cover of a cognitive therapy workbook as an instant turn-off (for many) which would understandably strengthen the consolidation of many chronic depressive thoughts.
But I do think that a cognitive therapy approach could be very helpful, provided it includes the depressive or negative thinking in an honest, thorough, systematic debate or dialectic. That is, the work has to involve "two-sided argument".
2) In the medical literature, there is a great deal of bias going on. Many of my previous postings have been about this. On other internet sites, there are various points of view, some of which are quite extreme. Sites which are invariably about "pharmaceutical industry bias," etc., are, I think, actually quite ineffectual if they merely cover the same theme over and over again. They are likely to be "preaching to the converted," and are likely to be viewed as themselves biased or extreme by someone looking for balanced advice. They may cause individuals with an already biased point of view to unreasonably entrench their positions further.
Also, I suspect the authors of sites like this may themselves have become quite biased. If their site has repeatedly criticized the inadequacy of the research data about some drug intended to treat depression or bipolar disorder, etc., they may be less likely to consider or publish contrary evidence that the drug actually works. Once we commit ourselves to a position, we all have a tendency to cling to that position, even when evidence should sway us.
On the other hand, a site which consistently gives medication advice of one sort or another is unlikely to change very many opinions on the issue, except among those who are already trying out different medications.
So, in my opinion, it is a healthy practice when analyzing issues, including health care decisions, to carefully consider both sides of an argument. If the issue has to do with a treatment, including a medication, a style of psychotherapy, an alternative health care modality, or of doing nothing at all, then I encourage the habit of analyzing the evidence in two ways:
1) gather all evidence which supports the modality
2) gather all evidence which opposes it
Then I encourage a weighing, and a synthesis, of these points of view, before making a decision.
I think that this is the most reliable way to minimize biases. If such a system is applied to one's own attitudes, thoughts, values, and behaviours, I think it is also the most effective way to promote change and growth.
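As a purely illustrative toy, the weighing habit could look something like the sketch below; the items and weights are invented placeholders, not a validated decision procedure:

```python
# A minimal sketch of the two-sided weighing habit described above.
# All items and weights are hypothetical placeholders.
evidence_for = [
    ("randomized trials show symptom reduction", 0.8),
    ("plausible mechanism of action", 0.4),
]
evidence_against = [
    ("small samples / possible publication bias", 0.6),
    ("side-effect burden", 0.5),
]

def total(items):
    return sum(weight for _, weight in items)

balance = total(evidence_for) - total(evidence_against)
print(f"support {total(evidence_for):.1f} vs opposition {total(evidence_against):.1f} -> balance {balance:+.1f}")
```

The point is not the numbers themselves, but the discipline of listing and weighing both columns before deciding.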
References:
Myers, David. Social Psychology, fourth edition. New York: McGraw-Hill; 1993. p. 275; 294-297.
Friday, October 16, 2009
Social Psychology
Social psychology is a wonderful, enchanting field.
It is full of delightful experiments which often reveal deeply illuminating facets of human nature.
The experiments are usually so well done that it is hard to argue with the results.
Many people in mental health fields, such as psychiatry, have not studied social psychology. I never took a course in it myself. I feel like signing up for one now.
Social psychology research could be applied to treating anxiety & depression; resolving conflict; improving morale; reducing violence on a personal or societal level; improving family & parental relationships; building social relationships, etc.
My only slight criticism of typical social psychology research is that it tends to be quite cross-sectional, and the effects or conditions studied are most often short-term (i.e. results that could typically be obtained in a study lasting a single afternoon). My strongest interest is in applied psychology, and I believe that immediate psychological effects can be important, but long-term psychological effects are of greatest importance. The brain works this way, on many levels: the brain can habituate to immediate stimuli, if those same stimuli are repeated over weeks or months. Learning in the brain can start immediately, but deeply ingrained learning (akin to language or music learning) takes months or years. So some results from a day-long study may only be as deeply insightful as administering a medication for a single day -- the effects haven't had a chance to accumulate or be subject to habituation.
In any case, I strongly encourage those interested in mental health to read through a current social psychology textbook (examples of these tend to be very well-written, readable, and entertaining), and to consider following the latest social psychology research. The biggest journal in social psychology is the Journal of Personality and Social Psychology.
Thursday, October 15, 2009
The Psychology of Happiness
So-called "positive psychology" is, in my opinion, a very important evolving field. Surprisingly, it is a relatively new field, in terms of formal academic study. Much of the past study of psychology, psychotherapy, and psychiatry has been focused on "pathology" or on treating symptoms of illness, rather than studying or understanding happiness.
Positive psychology need not be criticized as a discipline which defines normality as a continuous happy state. Rather, I think it is a different way of looking at, and nurturing, psychological health.
I'd like to discuss this subject further, but for now, here are a few authors to look at:
-Sonja Lyubomirsky
-Barbara Fredrickson
-Martin Seligman
-Richard Layard
Some insights from this field include the following:
- a "steady diet" of positive emotion increases a sense of meaning and purpose in life, and increases the likelihood of "flourishing" in life. * While this may seem like a truism, it really isn't: it is possible to make changes in lifestyle practices, and to practice skills, to increase positive emotion in daily life. Many people coast through their daily lives, lacking positive emotion, or a sense of meaning.
-Specific suggestions for increasing positive emotion include paying attention to kindness (giving and receiving); consciously increasing awareness in the present moment; simply going outside in good weather; or meditation techniques such as "loving kindness meditation."
-Also, a variety of research has suggested that the ratio of "positivity to negativity"--in dialog with others, in personal emotional experience, and, I would add, in dialog within your own mind--should exceed 5 to 1. Some of this research comes from looking at dialog in marriages and interactions in other groups. We all have a tendency to criticize too much--with others and with ourselves--which pushes the positive:negative ratio down, often far below 5 to 1. This suggestion does not advocate suppressing criticism or negative dialog; rather, it is about balancing the negative with a large abundance of positive. If you think of any teacher or guide who has ever helped you learn something or grow as a person, I'm pretty sure you'll find that the feedback given to you was mostly positive, with only occasional, concise, gentle criticisms. I recommend this approach in dealing with negative thoughts within your own mind--try to balance them, and aim for that 5-to-1 ratio (a toy tally illustrating this is sketched after the footnote below).
*see the article "Are you Happy Now?", interview of Barbara Fredrickson by Angela Winter, Utne Sep-Oct 09, p. 62-67.
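As mentioned above, here is a toy tally showing how one might track that ratio over a day; the entries are invented examples, and only the 5-to-1 target comes from the research described above:

```python
# Toy tally of a "positivity ratio" from a day's conversation or self-talk log.
# The remarks are invented examples; only the 5:1 target comes from the post above.
remarks = ["praise", "thanks", "encouragement", "criticism", "joke", "complaint",
           "compliment", "interest", "appreciation", "criticism"]
positive = sum(r not in ("criticism", "complaint") for r in remarks)
negative = len(remarks) - positive
print(f"positive:negative = {positive}:{negative} = {positive / negative:.1f} (target > 5.0)")
```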
Here are some more references, which I'll comment more on later:
http://www.ncbi.nlm.nih.gov/pubmed/17716102
http://www.ncbi.nlm.nih.gov/pubmed/17115810
http://www.ncbi.nlm.nih.gov/pubmed/16045394
http://www.ncbi.nlm.nih.gov/pubmed/11894851
http://www.ncbi.nlm.nih.gov/pubmed/19485613
http://www.ncbi.nlm.nih.gov/pubmed/18954193
http://www.ncbi.nlm.nih.gov/pubmed/17356687
http://www.ncbi.nlm.nih.gov/pubmed/11934003
http://www.ncbi.nlm.nih.gov/pubmed/19301241
http://www.ncbi.nlm.nih.gov/pubmed/11315250
http://www.ncbi.nlm.nih.gov/pubmed/19056790
http://www.ncbi.nlm.nih.gov/pubmed/19056788
http://www.ncbi.nlm.nih.gov/pubmed/19227700
http://www.ncbi.nlm.nih.gov/pubmed/18841581
http://www.ncbi.nlm.nih.gov/pubmed/18356530
http://www.ncbi.nlm.nih.gov/pubmed/17479628
http://www.ncbi.nlm.nih.gov/pubmed/17327862
Tuesday, October 13, 2009
Increasing anxiety in recent decades...continued
This is a sequel to a previous posting (http://garthkroeker.blogspot.com/2009/06/increasing-anxiety-in-recent-decades.html)
A visitor suggested the following July 2009 article to look at regarding this subject--here's a link to the abstract:
http://www.ncbi.nlm.nih.gov/pubmed/19660164
The author, "Ian Dowbiggin, PhD", is a history professor at the University of Prince Edward Island.
I found the article quite judgmental and poorly informed.
I thought there were some good points, exploring the interaction of social dynamics, political factors, secondary gain, etc. in the evolution of diagnostic labels; and perhaps exploring the idea that we may at times over-pathologize normal human experiences, character traits, or behaviours.
But, basically the author's message seems to be that we cling to diagnostic labels to avoid taking personal responsibility for our problems--and that therapists, the self-help movement, pharmaceutical companies, etc. are all involved in perpetuating this phenomenon.
Another implied point of view was that a hundred years ago, people might well have experienced similar symptoms, but would have accepted these symptoms as part of normal life, and carried on (presumably without complaint).
To quote the author:
"The overall environment of modern day life...bestows a kind of legitimacy on the pool of
anxiety-related symptoms"
This implies that some symptoms are "legitimate" and others are not, and that it is some kind of confusing or problematic feature of modern society that anxiety symptoms are currently considered "legitimate."
I am intensely annoyed by opinion papers which do not explore the other side of the issues--so here is another side to this issue:
1) perhaps, a hundred years ago, people suffered just as much, or worse, but lacked any sort of help for what was bothering them. They therefore lived with more pain, less productivity, less enjoyment, less of a voice, more isolation, and in most cases died at a younger age.
2) The development of a vocabulary to describe psychological distress does not necessarily cause more distress. The vocabulary helps us to identify experiences that were never right in the first place. The absence of a PTSD label does not mean that symptoms secondary to trauma did not exist before the 20th century. The author somewhat mockingly suggests that some people misuse a PTSD or similar label--that perhaps only those subject to combat trauma are entitled to use it, while those subject to verbal abuse in home life are not.
The availability of financial compensation related to PTSD has undoubtedly affected the number of people describing symptoms. But the author appears to leave readers with the impression that those seeking compensation via PTSD claims are "milking the system" (this is the subtitle of the PTSD section of this paper). There is little doubt that factitious and malingered symptoms are common, particularly when there is overt secondary gain. And the issue of how therapeutic it is to have long-term financial compensation for any sort of problem, is another matter for an evidence-based and politically charged debate. But to imply that all those who make financial claims regarding PTSD are "milking the system" seems very disrespectful to me. And to imply that a system which offers such compensation is somehow problematic again seems comparable to saying that the availability of fire or theft insurance is problematic. A constructive point of view on the matter, as far as I'm concerned, would be to consider ways to make compensation systems fair and more resistant to factitious or malingered claims.
With regard to social anxiety -- it may well be that "bashfulness" has been valued and accepted in many past--and present--cultures. But I suspect that the social alienation, social frustration, loneliness, and lack of ability to start new friendships, new conversations, or to find mates, have been phenomena similarly prevalent over the centuries. Our modern terminology suggests ways for a person who is "bashful" to choose for himself or herself, whether to stoically and silently accept this set of phenomena, or to address it as a medical problem, with a variety of techniques to change the symptoms. In this way the language can be empowering, leading to the discovery and nurturance of a voice, rather than leading to a sense of "victimhood."
Perhaps the lack of a vocabulary to articulate distress creates a spurious impression that the distress does not exist, or is not worthy of consideration. A historical analogy: terms such as "molecule," "uranium," or "electromagnetic field" may not have been used before 1701, 1797, or 1820, respectively, but this was merely a product of ignorance, not evidence that these phenomena did not exist in the 1600s and earlier.
It may well be true that many individuals misuse the vocabulary, or may exploit it for secondary gain. And it may well be true that some diagnostic labels introduce an iatrogenic or factitious illness (the multiple personality disorder issue could be debated along these lines). But to imply that the vocabulary itself is harmful to society is akin to saying that fire insurance is harmful, since some people misuse it by deliberately burning their houses down.
3) Similarly, the so-called self-help movement may, for some individuals, involve a flight into self-pathologizing language while ironically neglecting a healthy engagement with their lives. But in most cases, it has actually helped people to recognize, label, and improve their problems. For a start on some evidence regarding this, see the following meta-analysis on self-help for anxiety disorders: http://www.ncbi.nlm.nih.gov/pubmed/16942965
---
So, in conclusion, it is interesting to hear a different point of view. But I would expect a distinguished scholar to provide a much more balanced and insightful debate in such a paper, especially when it is published in a journal which is supposed to have high standards.
And I would certainly expect a much more thorough exploration of research evidence. The presence of 35 references in this paper may fool some readers into thinking that a reasonable survey of the research has been undertaken. Almost all of the references are themselves opinion pieces which merely support the author's point of view.
Labels: Anxiety, Philosophical Opinions or Beliefs, PTSD, Research
Thursday, October 8, 2009
Is Seroquel XR better than generic quetiapine?
A supplement written by Christoph Correll for The Canadian Journal of Diagnosis (September 2009) was delivered--free--into my office mailbox the other day.
It starts off describing the receptor-binding profiles of different atypical antipsychotic drugs. A table is presented early on.
First of all, the table as presented is almost meaningless: it merely shows the concentrations of the different drugs required to block 50% of the given receptors. These so-called "Ki" concentrations have little meaning, particularly for comparing between one drug and another, UNLESS one has a clear idea of what concentrations the given drugs actually reach when administered at typical doses.
So, of course, quetiapine has much higher Ki concentrations for most receptors, compared to risperidone -- this is related to the fact that quetiapine doses are in the hundreds of milligrams, whereas risperidone doses are less than ten milligrams (these dose differences are not reflective of anything clinically relevant, and only pertain to the size of the tablet needed).
A much more meaningful chart would show one of the following (a rough sketch of this kind of calculation is given below):
1) the receptor blockades for each drug when the drug is administered at typical doses
2) the relative receptor blockade compared to a common receptor (so, for example, the ratio between receptor blockades of H1 or M1 or 5-HT2 compared to D2, for each drug).
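To illustrate what such a chart would involve, here is a minimal sketch assuming simple competitive binding, where fractional occupancy = C / (C + Ki); all Ki values and the concentration below are hypothetical placeholders, not published data for any actual drug:

```python
# Fractional receptor occupancy at a typical concentration, assuming simple
# competitive binding (occupancy = C / (C + Ki)).
# All Ki values and the concentration are hypothetical placeholders.
def occupancy(concentration_nM, ki_nM):
    return concentration_nM / (concentration_nM + ki_nM)

drug_ki_nM = {  # hypothetical Ki values (nM) for one drug
    "D2": 300, "5-HT2A": 100, "H1": 10, "M1": 1000,
}
typical_free_concentration_nM = 400  # hypothetical steady-state free concentration

d2 = occupancy(typical_free_concentration_nM, drug_ki_nM["D2"])
for receptor, ki in drug_ki_nM.items():
    occ = occupancy(typical_free_concentration_nM, ki)
    print(f"{receptor}: {occ:.0%} occupied, {occ / d2:.1f}x relative to D2")
```

Presented this way, the clinically relevant comparison--how much of each receptor is actually blocked at a typical dose, and how that compares with D2 blockade--becomes immediately visible, whereas a bare table of Ki values obscures it.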
The article goes on to explore a variety of other interesting differences between antipsychotics. Many of the statements made were theoretical propositions, not necessarily well-proven empirically. But in general I found this discussion valuable.
Despite apparent efforts for the author to be fair and balanced regarding the different antipsychotics, I note a few things:
1) there are two charts in this article showing symptom improvements in bipolar disorder among patients taking quetiapine extended-release (Seroquel XR).
2) one large figure appears to show that quetiapine has superior efficacy in treating schizophrenia, compared to olanzapine and risperidone (the only "p<.05 asterisk" was for quetiapine!) -- this figure was based on a single 2005 meta-analysis, published in a minor journal, before the CATIE results were published. No other figures were shown based on more recent results, nor was clozapine included in any figure.
I think quetiapine is a good drug. BUT -- I don't see any evidence that quetiapine extended release is actually any better, in any regard, than regular quetiapine. In fact, I have seen several patients for whom regular quetiapine suited them better than extended-release, and for whom a smaller total daily dose was needed.
Here is a reference to one study, done by AstraZeneca, comparing Seroquel with Seroquel XR in healthy subjects: http://www.ncbi.nlm.nih.gov/pubmed/19393840 It shows that subjects given regular quetiapine were much more sedated 1 hour after dosing, compared to those given the same dose of Seroquel XR, implying that the extended-release formulation was superior in terms of side-effects. Here is my critique of this study:
1) Sedation is often a goal in giving quetiapine, particularly in the treatment of psychosis or mania.
2) Problematic sedation is usually the type that persists 12 hours or more after the dose, as opposed to one hour after the dose. In this study, the two formulations did not differ in a statistically significant way with respect to sedation 7, 8, or 14 hours after dosing. In fact, if you look closely at the tables presented within the article, the Seroquel XR group actually had slightly higher sedation scores 14 hours after dosing.
3) Dosing of any drug can be titrated to optimal effect. Regular quetiapine need not be given at exactly the same dose as quetiapine XR--giving both drugs at the same dose, rather than at the optimally effective dose for each, is likely to bias the results greatly.
4) The study lasted only 5 days for each drug! To meaningfully compare effectiveness or side-effects between two drugs, it is necessary to look at differences after a month, or a year, of continuous treatment. For most sedating drugs, problematic sedation diminishes after a period of weeks or months. Once again, if immediate sedation is the measure of side-effect adversity, then this study is biased in favour of Seroquel XR.
5) The study was done in healthy subjects who did not have active symptoms to treat. This reminds me of giving insulin to non-diabetic subjects and comparing the side-effects of the different insulin preparations: the choice of population is an obvious, strong bias!
Regular quetiapine has gone generic.
Quetiapine extended-release (Seroquel XR) has not.
I am bothered by the possibility of bias in Correll's article.
It is noted, in small print at the very end of this article, that Dr. Correll is "an advisor or consultant to AstraZeneca, Bristol-Myers Squibb, Cephalon, Eli Lilly, Organon, Ortho McNeill-Janssen, Otsuka, Pfizer, Solvay, Supernus, and Vanda." AstraZeneca is the company which manufactures Seroquel XR.
In conclusion, I agree that there are obviously differences in receptor-binding profiles between these different drugs. There are some side-effect differences.
Differences in actual effectiveness, as shown in comparative studies, are minimal. But probably olanzapine, and especially clozapine, are slightly better than the others, in terms of symptom control.
Quetiapine can be an excellent drug. Seroquel XR can be an excellent formulation of quetiapine, and might suit some people better.
BUT -- there is no evidence that brand-name Seroquel XR is superior to generic regular quetiapine.
One individual might respond better to one drug, compared to another.
The author, despite including 40 references, seems to have left out many important research studies on differences between antipsychotics, such as from CATIE and SOHO.
(see my previous post on antipsychotics: http://garthkroeker.blogspot.com/2008/12/antipsychotic-medications.html )
Monday, October 5, 2009
Hallucinations
Hallucinations are perceptions which take place in the absence of a stimulus from the peripheral or sensory nervous system.
They may be classified in a variety of different ways (this is an incomplete list):
1) by sensory modality
a) auditory: these are most common, and may be perceived as voices speaking or mumbling; musical sounds; or other more cacophonous sounds
b) visual: these can occur more commonly in delirious states or medical illnesses affecting the brain. Many people experience normal, but unsettling, visual hallucinations, just when falling asleep or waking up.
c) tactile: these are most common in chemical intoxication syndromes, such as with cocaine.
d) olfactory: more common in medical illness
2) by positionality
-when describing hallucinated voices, if the voices are perceived to originate inside the head, or to not have any perceived origin, then they could be called "pseudohallucinations." If the voices are perceived to originate from a particular place, such as from the ceiling or from across the room, then they could be called "hallucinations" or "true hallucinations." This terminology has been used to distinguish between the hallucinations in schizophrenia and psychotic mood disorders (which are typically "true hallucinations") and those experienced in non-psychotic disorders (pseudohallucinations are more typically--though not invariably--associated with dissociative disorders, borderline personality, or PTSD).
3) by insight
An individual experiencing a "psychotic hallucination" will attribute the phenomenon to stimuli outside of the brain. An individual experiencing a "non-psychotic hallucination" will attribute the phenomenon to his or her own brain activity, and recognize the absence of an external stimulus to account for the experience. In most cases, "insight" fluctuates on a continuum, and many individuals experiencing hallucinations will have some intellectual understanding of their perceptions being hallucinatory, but still feel on a visceral level that the perceptions are "real."
4) by character
Voices in particular can be described in a variety of ways. So-called "first rank symptoms of schizophrenia" include hallucinated voices which comment on a person's behavior, or include several voices which converse with each other.
The quality of the voice can vary, with harsh, angry, critical tones more common in psychotic depression, and neutral emotionality more common in schizophrenic states.
--all of these above descriptions are incomplete, and associations between one type of hallucination and a specific "diagnosis" are imperfect. A great deal of variation exists--
It is probably true that some hallucinations are factitious (i.e. the person is not actually hallucinating, despite claiming to), but of course this would be virtually impossible to prove. Something like functional brain imaging might be an interesting, though impractical, tool to examine this phenomenon. People with psychotic disorders or borderline personality might at times describe factitious hallucinatory phenomena in order to communicate emotional distress or need to caregivers. Or sometimes the phenomena may convey some type of figurative meaning. The motivation to do this might not always be conscious.
There are a variety of ways to treat hallucinations.
In my opinion, the single most effective treatment is an antipsychotic medication. Hallucinations due to almost any cause are likely to diminish with antipsychotic medication treatment.
There is evolving evidence that CBT and other psychotherapy can help with hallucinations. Here are some references:
http://www.ncbi.nlm.nih.gov/pubmed/19176275
http://www.ncbi.nlm.nih.gov/pubmed/9827323
Some individuals may not be bothered by their hallucinations. In this case, it may sometimes be more the physician's agenda than the patient's to "treat" the symptom. Yet, it is probably true that active hallucinations in psychotic disorders are harbingers of other worsening symptoms, so it may be important to treat the symptom early, even if it is not troublesome.
Other types of behavioral tactics can help, including listening to music, wearing ear plugs, other distractions, etc. In dealing with pseudohallucinations or non-psychotic hallucinations, "mindfulness" exercises may be quite important. A well-boundaried psychodynamically-oriented therapy structure could be very helpful for non-psychotic hallucinations or pseudohallucinations associated with borderline personality dynamics or PTSD. Care would need to be taken, in these cases, not to focus excessively or "deeply" on the hallucinations, particularly without the patient's clear consent, since such a dialog could intensify the symptoms.
They may be classified in a variety of different ways (this is an incomplete list):
1)by sensory modality
a) auditory: these are most common, and may be perceived as voices speaking or mumbling; musical sounds; or other more cacophonous sounds
b) visual: these can occur more commonly in delirious states or medical illnesses affecting the brain. Many people experience normal, but unsettling, visual hallucinations, just when falling asleep or waking up.
c) tactile: these are most common in chemical intoxication syndromes, such as with cocaine.
d) olfactory: more common in medical illness
2) by positionality
-when describing hallucinated voices, if the voices are perceived to originate inside the head, or to not have any perceived origin, then they could be called "pseudohallucinations." If the voices are perceived to originate from a particular place, such as from the ceiling or from across the room, then they could be called "hallucinations" or "true hallucinations." This terminology has been used to distinguish between the hallucinations in schizophrenia and psychotic mood disorders (which are typically "true hallucinations") and those experienced in non-psychotic disorders (pseudohallucinations are more typically--though not invariably--associated with dissociative disorders, borderline personality, or PTSD).
3) by insight
An individual experiencing a "psychotic hallucination" will attribute the phenomenon to stimuli outside of the brain. An individual experiencing a "non-psychotic hallucination" will attribute the phenomenon to his or her own brain activity, and recognize the absence of an external stimulus to account for the experience. In most cases, "insight" fluctuates on a continuum, and many individuals experiencing hallucinations will have some intellectual understanding of their perceptions being hallucinatory, but still feel on a visceral level that the perceptions are "real."
4) by character
Voices in particular can be described in a variety of ways. So-called "first rank symptoms of schizophrenia" include hallucinated voices which comment on a person's behavior, or include several voices which converse with each other.
The quality of the voice can vary, with harsh, angry, critical tones more common in psychotic depression, and neutral emotionality more common in schizophrenic states.
--all of these above descriptions are incomplete, and associations between one type of hallucination and a specific "diagnosis" are imperfect. A great deal of variation exists--
It is probably true that some hallucinations are factitious (i.e. the person is not actually hallucinating, despite claiming to), but of course this would be virtually impossible to prove. Something like functional brain imaging might be an interesting, though impractical, tool to examine this phenomenon. People with psychotic disorders or borderline personality might at times describe factitious hallucinatory phenomena in order to communicate emotional distress or need to caregivers. Or sometimes the phenomena may convey some type of figurative meaning. The motivation to do this might not always be conscious.
There are a variety of ways to treat hallucinations.
In my opinion, the single most effective treatment is an antipsychotic medication. Hallucinations due to almost any cause are likely to diminish with antipsychotic medication treatment.
There is evolving evidence that CBT and other psychotherapy can help with hallucinations. Here are some references:
http://www.ncbi.nlm.nih.gov/pubmed/19176275
http://www.ncbi.nlm.nih.gov/pubmed/9827323
Some individuals may not be bothered by their hallucinations. In this case, it may sometimes be more the physician's agenda than the patient's to "treat" the symptom. Yet, it is probably true that active hallucinations in psychotic disorders are harbingers of other worsening symptoms, so it may be important to treat the symptom early, even if it is not troublesome.
Other types of behavioral tactics can help, including listening to music, wearing earplugs, and other distractions. In dealing with pseudohallucinations or non-psychotic hallucinations, "mindfulness" exercises may be quite important. A well-boundaried, psychodynamically-oriented therapy structure could be very helpful for non-psychotic hallucinations or pseudohallucinations associated with borderline personality dynamics or PTSD. Care would need to be taken, in these cases, not to focus excessively or "deeply" on the hallucinations, particularly without the patient's clear consent, since such a dialog could intensify the symptoms.
Mediterranean diet is good for your brain
This month's Archives of General Psychiatry includes a study by Sanchez-Villegas et al. showing a strong association between consuming a Mediterranean diet (lots of vegetables, fruits, nuts, whole grains, and fish; low intake of meat; moderate intake of alcohol and dairy; and a high ratio of monounsaturated to saturated fatty acids) and lower rates of depression. Data were gathered prospectively over a period averaging more than 4 years, following about 10,000 initially healthy students in Spain who reported their food intake on questionnaires.
I'll have to look closely at the full text of the article. I'm interested to consider the question of whether the results strongly suggest causation, or whether the results could be due to non-causal association. That is, perhaps people in Spain with a higher tendency to become depressed tend to choose non-Mediterranean diets. Another issue is cultural: the study was done in Spain, where a Mediterranean diet may be associated with certain--perhaps more traditional--cultural or subcultural features, and this cultural factor may then mediate the association with depressive risk.
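To make the non-causal possibility concrete, here is a small simulation sketch in Python (the numbers are purely invented, not taken from the study): if an unmeasured cultural or lifestyle factor independently makes a Mediterranean diet more likely and lowers depression risk, the diet and depression will be associated in observational data even if the diet itself has no effect at all.

# Hypothetical illustration of confounding; the probabilities below are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Unmeasured confounder: a "traditional lifestyle" factor that makes a
# Mediterranean diet more likely AND (independently) lowers depression risk.
traditional = rng.random(n) < 0.5
med_diet = rng.random(n) < np.where(traditional, 0.7, 0.3)
depression = rng.random(n) < np.where(traditional, 0.05, 0.12)  # diet has NO effect here

print(f"depression rate, Mediterranean diet:     {depression[med_diet].mean():.3f}")
print(f"depression rate, non-Mediterranean diet: {depression[~med_diet].mean():.3f}")
# The two rates differ even though diet is causally inert in this simulation.

In the simulated data the diet group shows a lower depression rate purely through the confounder, which is exactly the kind of alternative explanation a prospective observational study cannot rule out on its own.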
In any case, in the meantime, given the preponderance of other data showing health benefits from a Mediterranean-style diet, I wholeheartedly (!) recommend consuming more nuts, vegetables, olive oil, fish, whole grains, and fruit; and less red meat.
The need for CME
Here's another article from "the last psychiatrist" on CME:
http://thelastpsychiatrist.com/2009/07/who_should_pay_for_continuing.html#more
Another insightful article, but pretty cynical!
But here are some of my opinions on this one:
1) I think that, without formalized CME documentation requirements, there would be some doctors who would fall farther and farther behind in understanding current trends of practice, current research evidence, etc.
2) In the education of intelligent individuals, I have long felt that process is much more important than content. A particular article with an accompanying quiz is bound to convey a certain biased perspective. It is my hope that most professionals are capable of recognizing and resisting such biases; in this modern age, I think most of us have a greater awareness of being "sold" something. In any case, the process of working through such an article provides a structure for contemplating a particular subject, and perhaps for raising certain questions or a debate in one's mind about it, to reflect upon or research further later on. Yet I agree that there are many psychiatrists who might be swayed, in a non-critical manner, by a biased presentation of information. The subsequent quiz, and the individual's high marks on it, then become reinforcers for learning biased information.
3) After accurately critiquing a problem, we should then move on and try to work together to make more imaginative, creative educational programs which are stimulating, enjoyable, fair, and as free of bias as possible.
I think this concludes my little journey through this other blog. While interesting, I find it excessively cynical. It reminds me of someone in the back seat of my car continuously telling me--accurately, and perhaps even with some insightful humour--all the things I'm doing wrong. Maybe I need to hear this kind of feedback periodically--but small doses are preferable! Actually, I find my own writing at this moment becoming more cynical than I want it to be.
Opinions on mistakes psychiatrists make
Here's another interesting link from "the last psychiatrist" blog:
http://thelastpsychiatrist.com/2006/11/post_2.html#more
I agree with many of his points.
But here are a few counterpoints, in order:
1. I think some psychiatrists talk too little. There's a difference between nervous or inappropriate chatter diluting or interrupting a patient's opportunity to speak, and an engaged dialog focusing on the process or content of a problem. There is a tradition in psychiatric practice, rooted in or emphasized by psychoanalysis, that the therapist is to be nearly silent. Sometimes I think these silences are unhelpful, unnecessary, inefficient, even harmful. There are some patients I can think of for whom silence in a social context is extremely uncomfortable, and certainly not an opportunity to learn in therapy. Therapy in some settings can be an exercise in meaningful dialog, active social skills practice, or simply a chance to converse or laugh spontaneously.
I probably speak too much, myself--and I need to keep my mouth shut a little more often. I have to keep an eye on this one.
It is probably better for most psychiatrists to err on the side of speaking too little, I would agree. An inappropriately overtalkative therapist is probably worse than an inappropriately undertalkative one. But I think many of us have been taught to be so silent that we cannot be fully present, intuitively, personally, intellectually, to help someone optimally. In these cases, sometimes the tradition of therapeutic silence can suppress healthy spontaneity, positivity, and humour in a way which only delays or obstructs a patient's therapy experience.
2. I agree strongly with this one--especially when history details are ruminated about interminably during the first few sessions.
However, I do think that a framework to be comprehensive is important. And sometimes it is valuable, in my opinion, to entirely review the whole history, after seeing a patient for a year, or for many years. There is so much focus on comprehensive history-taking during the first few sessions, or the first hour, that we forget to revisit or deepen this understanding after knowing a patient much better, later on. Sometimes whole elements of a patient's history can be forgotten, because they were only talked about once, during the first session.
There is a professional standard of doing a "comprehensive psychiatric history" in a single interview of no longer than 55 minutes. There may even be a certain bravado among residents, or an admiration for someone who can "get the most information" in that single hour. I object to this being a dogmatic standard. A psychiatric history, as a personal story, may take years to understand well, and even then the story is never complete. It can be quite arrogant to assume that a single brief interview (which, if optimal exchange of "facts" is to take place, can sound like an interrogation) can lead to a comprehensive understanding of a patient.
I do believe, though, that certain elements of comprehensiveness should be aimed for, and aimed for early. For example, it is very important to ask about someone's medical ailments, about substance use, about various symptoms the person may be too embarrassed to mention unless asked directly, etc. Otherwise an underlying problem could be entirely missed, and the ensuing therapy could be very ineffective or even deleterious.
Also, some individual patients may feel a benefit or relief to go through a very comprehensive historical review in the first few sessions, with the structure of the dialog supplied mainly from the therapist. Other individual patients may feel more comfortable, or find it more beneficial, to supply the structure of their story themselves. So maybe it's important not to make strong imperative statements on this question: as with so many other things in psychiatry, a lot depends on the individual situation.
3. I think it's important not to ignore ANY habitual behavior that could be harmful. Yet perhaps some times are better than others to address or push for things like smoking or soft-drink cessation: a person with a chronically unstable mood disorder may require improved mood stability (some of which may actually come from cigarette smoking, in a short-term sense anyway), before they are able to embark on a quit-smoking plan.
4. Not much to add here.
5. Well, point taken. I've written a post about psychiatry and politics before, and suggested a kind of detached, "monastic role." But on the other hand, any person or group may have a certain influence--the article here suggests basically that it's none of psychiatry's business to deal with political or social policy. Maybe not. But the fact is, psychiatry does have some influence to effect social change. And, in my opinion, it is obvious that social and political dynamics are driven by forces that are similar to the dynamics which operate in a single family, or in an individual's mind. So, if there is any wisdom in psychiatry, it could certainly be applicable to the political arena. Unfortunately, it appears to me that psychiatrists I have seen getting involved in politics or other group dynamics are just as swept up in dysfunctional conflict, etc. as anyone else.
But if there's something that psychiatry can do to help with war or world hunger, etc. -- why not? In some historic situations an unlikely organized group has come to the great aid of a marginalized or persecuted group in need of relief or justice, even though the organized group didn't necessarily have any specialized knowledge of the matter they were dealing with.
6. I strongly agree. I prefer to offer therapy to most people I see, and I think most people do not have adequate opportunities to experience therapy. Yet I also observe that many individuals could be treated with a medication prescribed by a GP, and simply experience resolution of their symptoms. Subsequent "therapy" is done by the individual in their daily life, and does not require a "therapist." In these cases, the medication may not be needed anymore, perhaps after a year or so. Sometimes therapists may end up offering something that isn't really needed, or may aggrandize the role or importance of "therapy" (we studied all those years to learn to be therapists, after all--so a therapist's view on the matter may be quite biased), when occasionally the best therapy of all could simply be self-provided. Yet, of course, many situations are not so simple at all, and that's where a therapy experience can be very, very important. I support the idea of respecting the patient's individual wishes on this matter, after providing the best possible presentation of the benefits and risks of different options. Of course, we're all biased in how we understand this benefit/risk profile.
7. Some interesting points here, but subject to debate. Addressing these complex subjects in an imperative manner makes me uncomfortable.
8. Polypharmacy should certainly not be a norm, though intelligent use of combination therapies, in conjunction with a clear understanding of side-effect risks, can sometimes be helpful. Some of the statements made in this section have not actually been studied well: for example, the claim that it makes no pharmacological sense to combine two different SSRI antidepressants at the same time. There is no body of research data PROVING that such a combination is in fact ineffectual. Therefore, before we scoff at the practitioner who prescribes two SSRIs at once, I think we should look at the empirical result--since there are no prospective randomized studies, the best we can do is see whether the individual patient is feeling better, or not.
9. I'm not a big fan of "diagnosis," but sometimes, and for some individuals, it can be part of a very helpful therapy experience to be able to give a set of problems a name. This name, this category, may lead the person to understand more about causes and solutions. Narrative therapy, I think, makes good use of "naming" (a variant of "diagnosing") as a therapeutic construct.
10. There isn't a number 10 here, but the comments at the end of this article were good.
Biased Presentation of statistical data: LOCF vs. MMRM
This is a brief posting about biostatistics.
In clinical trials, some subjects drop out.
The quality of a study is best if there are few drop-outs, and if data continues to be collected on those who have dropped out.
LOCF ("last observation carried forward") and MMRM ("mixed-effects model for repeated measures") are two different statistical approaches to dealing with study populations where some of the subjects have dropped out.
The two techniques may generate different numbers from the same data set, and therefore different conclusions to present.
The following article illustrates how these techniques can skew the presentation of data, and therefore change our conclusions about an issue, despite nothing "dishonest" taking place:
http://thelastpsychiatrist.com/2009/06/its_not_a_lie_if_its_true.html#more
While I agree with the general point of the above article, I find that the specific example it refers to is not necessarily more biased: as I research the subject myself, I find that LOCF is not clearly superior to MMRM, although LOCF has been the most commonly used method for dealing statistically with drop-outs. The following references make a case that MMRM is less biased than LOCF most of the time (though it should be kept in mind that whenever there are drop-outs lost to follow-up, the absence of data on these subjects weakens the study's results--this issue deserves close attention when reading a paper):
http://www.stat.tamu.edu/~carroll/talks/locfmmrm_jsm_2004_rjc.pdf
http://www3.interscience.wiley.com/journal/114177424/abstract?CRETRY=1&SRETRY=0
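To make the LOCF idea concrete, here is a minimal Python sketch--a toy simulation, not any of the analyses discussed above--showing how carrying the last observation forward can shift an endpoint estimate when dropout is related to how subjects are doing. A true MMRM analysis would instead fit a mixed-effects model to all observed visits (for example with statsmodels' MixedLM), which is omitted here for brevity.

# Toy illustration of LOCF ("last observation carried forward"); the numbers
# are invented and are not from any study cited in this post.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_visits = 200, 6

# Simulate symptom scores that, on average, improve (decline) over visits.
baseline = rng.normal(30, 5, size=n_subjects)
scores = (baseline[:, None] - 2.0 * np.arange(n_visits)
          + rng.normal(0, 3, size=(n_subjects, n_visits)))

# Informative dropout: subjects improving fastest by visit 2 are more likely
# to leave the study, so their later (better) values are never observed.
early_change = scores[:, 2] - scores[:, 0]
p_drop = 1 / (1 + np.exp(1.0 + 0.4 * early_change))
observed = scores.copy()
observed[rng.random(n_subjects) < p_drop, 3:] = np.nan

def locf(data):
    """Carry each subject's last observed value forward over missing visits."""
    filled = data.copy()
    for j in range(1, filled.shape[1]):
        gap = np.isnan(filled[:, j])
        filled[gap, j] = filled[gap, j - 1]
    return filled

print("endpoint mean with full data (truth):", round(scores[:, -1].mean(), 2))
print("endpoint mean under LOCF:            ", round(locf(observed)[:, -1].mean(), 2))
print("endpoint mean, completers only:      ", round(float(np.nanmean(observed[:, -1])), 2))

In this toy example the same trial yields noticeably different endpoint numbers depending on how the missing visits are handled, which is the general point of the article linked above.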
In conclusion, I can only encourage readers of studies to be more informed about statistics. And, if you are looking at a study which could change your treatment of an illness, then it is important to read the whole study, in detail, if possible (not just the abstract).
Which is better, a simple drug or a complex drug?
Here is another critique of medication marketing trends in psychiatry:
http://thelastpsychiatrist.com/2009/04/how_dangerous_is_academic_psyc_1.html#more
I agree quite strongly that there has been collusion between:
- psychiatrists who eagerly yearn to meaningfully apply their knowledge of psychopharmacology, pharmacokinetics, neurotransmitter receptor binding profiles, etc. (to justify all those years of study)
- and pharmaceutical company sales reps
I can recall attending many academic rounds presentations in which a new drug would be discussed, for example a newly released SSRI. During the talk, there would be boasting about how the new drug had the highest "receptor specificity," or had the lowest activity at receptors other than those for serotonin (e.g. those for histamine or acetylcholine).
These facts that I was being shown, while enjoying my corporate-sponsored lunch, were true. But they were used as sales tactics, bypassing clear scientific thought. Just because something is more "receptor-specific" doesn't mean that it works better! It may in some cases be related to a difference in side effects. Yet sometimes those very side effects may be related to the efficacy of the drug.
By way of counter-example, I would cite the most effective of all antipsychotic medications, clozapine. This drug has very little "receptor specificity": it interacts with all sorts of different receptors. And it has loads of side effects too. Perhaps this is part of the reason it works so well. Unfortunately, this does not sit well with those of us who yearn to explain psychiatric medication effects using simple flow charts.
Similarly, the pharmacokinetic differences between medications are often used as instruments of persuasion--yet often they are clinically irrelevant, of unproven clinical relevance, or even clinically inferior (e.g. newer SSRI antidepressants have short half-lives, which can be advantageous in some regards; but plain old Prozac, with its very long half-life, can be an excellent choice, because individuals taking it can safely skip a dose without a big change in serum level, and without the ensuing side effects).
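As a rough back-of-the-envelope illustration of that last point (a sketch only, using approximate textbook half-life values and simple first-order elimination, not patient-specific pharmacokinetics):

# With simple first-order elimination, the fraction of drug remaining after a
# missed dose is 0.5 ** (hours / half_life). The half-lives below are
# approximate figures, used for illustration only.
def fraction_remaining(hours: float, half_life_hours: float) -> float:
    return 0.5 ** (hours / half_life_hours)

examples = [
    ("paroxetine (half-life ~21 h)", 21.0),
    ("fluoxetine, parent drug (half-life roughly 4 days)", 96.0),
    ("norfluoxetine, active metabolite (half-life roughly 10 days)", 240.0),
]
for name, t_half in examples:
    print(f"{name}: about {fraction_remaining(24, t_half):.0%} of the level "
          f"remains 24 h after a missed dose")

So a skipped day of a short half-life SSRI roughly halves the serum level, while a skipped day of fluoxetine (with its long-lived active metabolite) barely changes it--which is the practical advantage alluded to above.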
I should not be too cynical here -- it is important to know the scientific facts that can be known about something. Receptor binding profiles and half-lives, etc. are important. And it can be useful to find medications that have fewer side-effects, because of fewer extraneous receptor effects. The problem is when we use facts spuriously, or allow them to persuade us as part of someone's sales tactic.
So, coming back to the question in the title, I would say it is not necessarily relevant whether a drug works in a simple or complex way. It is relevant whether it works empirically, irrespective of the complexity of its pharmacologic effects.
Pregnancy & Depressive Relapse
I was looking at an article in JAMA from 2006, which was about pregnant women taking antidepressants. They were followed through pregnancy, and depressive relapses were related to changes in antidepressant dose. Here's a link to the abstract:
http://www.ncbi.nlm.nih.gov/pubmed/16449615
The study is too weakly designed to allow strong conclusions. Yet the abstract makes a statement about "pregnancy not being protective" which--while possibly true--is not directly related to the findings of the study. This criticism was astutely made by the author of "The Last Psychiatrist" blog:
http://thelastpsychiatrist.com/2006/10/jama_deludes.html
Yet the JAMA study is not uninformative.
And the criticism mentioned above goes a bit too far, in my opinion. The critique itself makes overly strong statements in its own title & abstract.
It appears quite clear that pregnant women with a history of depressive illness, who are taking antidepressants, but decrease or discontinue their medication during the pregnancy, have a substantially higher risk of depressive relapse.
Because the study was not randomized, we cannot know for sure that this association is causal. But causation would be reasonably suggested. It does not seem likely that this large effect would have been caused by women whose "unstable" depressive symptoms led them to discontinue their antidepressants (i.e. it does not seem likely to me that "reverse causation" would be a prominent cause for this finding). I think this could happen in some cases, but not frequently. Nor does it seem likely to me that a woman already taking an antidepressant, who becomes more depressed during the pregnancy, would therefore stop taking her medication. This, too, could happen (I can think of clinical examples), but I don't think it would be common. It seems most likely to me that the causation is quite simple: stabilized depressive illness during pregnancy is likely to become less stable, and more prone to relapse, if antidepressant medication is discontinued.
The critique of this article also discusses the fact that women in the study who increased their doses of medication also had higher rates of depressive relapse, yet this fact is not mentioned very much in the abstract or conclusion. This finding is also not surprising--what other reason would a pregnant woman have to increase a dose of medication which she was already taking during her pregnancy, other than an escalation of symptoms? In this case, depressive relapse (which can happen despite medication treatment) is likely the cause of the increased dose--the increased dose is unlikely to have caused the depressive relapse.
Yet, as I said above, the study only allows us to infer these conclusions, as it was not randomized. And I agree that the authors overstate their conclusions in the abstract. In order to more definitively answer these questions, a randomized prospective study would need to be done.