
Tuesday, November 10, 2015

The Business of Psychological Questionnaires

Questionnaires are certainly in vogue in mental health research.  They are often referred to in technical-sounding jargon; for example, it is common to call a questionnaire an "instrument" or a "measurement tool."

There are good reasons to have well-standardized questionnaires.  In research, it is useful if people across the world are all using a similar type of questionnaire, so that comparisons can be made more easily and clearly.

In psychotherapy or other mental health practice, there is evidence that obtaining regular feedback from patients or clients can improve the quality of the therapy and prevent mistakes.  One of the leaders in showing the importance of this is Michael Lambert, an esteemed psychologist and psychotherapy researcher from Brigham Young University.  In a nutshell, his research shows that problems can occur in psychotherapy without the therapist realizing it:  the patient or client could be developing new symptoms, detaching or losing interest in the therapy, feeling upset or disappointed with the therapist, or even developing a life-threatening emergency, but the therapist may not know, because it is not talked about or asked about in the session.  This could be because the patient feels too inhibited to share the information, but it could also be simply because the problem was never inquired about.  In therapy sessions, just as in any other interaction, one can habitually follow a certain narrative pathway, and so miss things that are quietly going wrong in the background.

So Lambert developed a questionnaire called the OQ-45, which consists of 45 simple questions covering areas including mood, anxiety, relationship satisfaction, loneliness, drinking, family life, work life, cognition, and physical health.  The idea is for patients or clients to fill in this questionnaire frequently, perhaps even before every therapy appointment, so that no evolving problem area is "missed."  The questionnaire takes only a few minutes to fill out, and can be completed in the waiting room before an appointment.  Samples of the OQ-45 can be found through an internet search.


I believe that this type of questionnaire is useful.  Certainly we should respect Lambert's many years of research showing that feedback of this type can improve therapy.

But the therapeutic benefit is not due to some special property of the questionnaire itself!  Nor does it require the sophisticated statistical analysis that is offered to purchasers of the questionnaire!  The benefit comes simply from doing a regular review of symptoms with patients or clients.

Questionnaires in psychology have become a business.  For hundreds of dollars, one can sign up to receive copies of a questionnaire, scoring manuals, or perhaps an on-line entry and scoring package, which may produce attractive graphs of results.

I believe that it is absurd--in most cases--to have to pay for something like this.  The therapeutic principle here is simply to keep track of a wide range of symptoms or problems systematically.  The technology is not a sophisticated x-ray machine or microscope; it is a set of simple items such as "I'm a good person" or "My body hurts," each rated from 0-4.

I have jokingly thought of creating a questionnaire, to be marketed, with a full statistical analysis package and online access, called the "How Are You Doing" instrument (the HAY-D-1).  It would consist of a single question, "How are you doing?"  with the opportunity to choose from one of 5 responses.    Perhaps there could be a published article demonstrating its reliability, validity, and correlations with other established research instruments. 

Understandably, many researchers have worked long and hard to show useful results from their work.  And it could be very desirable for them to have a way to earn a financial reward from the fruits of their labor.  I suppose, in a free society, it is quite reasonable for people to attempt to sell such things, if people are willing to buy them.

But when there is this type of marketing and financial dealing going on, it can increase biases on the part of both the seller and the buyer.  The buyer, having paid good money for questionnaires or "instruments," is more likely to think highly of their acquisition, due to cognitive bias (think again of Daniel Kahneman's work showing such effects).  Perhaps therapists are more likely to rely on such purchased questionnaires rather than simply creating their own.

I think it could be useful, if questionnaires are to be used at all, to create custom symptom review questions.  There is also some evidence that questions about the therapeutic alliance could be pertinent to therapeutic progress; these are absent from many symptom review surveys, including the OQ-45.

A nice idea in CBT is to have the clients or patients be actively involved in assessing and planning their own progress, instead of having the therapist be the "assessor."  So, it could be a useful therapeutic exercise for clients or patients to design their own questionnaires, using their own language, and their own scale!  The therapist could encourage and suggest a wide range of categories of questions to be followed, covering areas of physical, social, occupational, cultural, and psychological health, as well as a category about the therapeutic alliance, but the questions themselves could be designed by the client or patient!    If statistical analysis was felt to be interesting or useful, we could easily design a simple app to create graphs, or use a spreadsheet -- we would not have to pay an extra fee for this!
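
To make the "simple app" idea concrete, here is a minimal sketch of what a personalized questionnaire tracker could look like.  The five items, the 0-4 scale, and the weekly ratings below are all hypothetical examples, not part of any published instrument:

```python
# A minimal sketch of a client-designed symptom tracker; items, scale,
# and session data are invented for illustration.
import matplotlib.pyplot as plt

items = [
    "Low mood",
    "Sleep quality",
    "Enjoyment of hobbies",
    "Connection with friends",
    "Comfort with my therapist",   # a therapeutic-alliance item
]

# One row of 0-4 self-ratings per weekly session (invented data).
sessions = [
    [3, 2, 1, 2, 3],
    [3, 2, 2, 2, 3],
    [2, 3, 2, 3, 4],
    [1, 3, 3, 3, 4],
]

for i, label in enumerate(items):
    plt.plot(range(1, len(sessions) + 1),
             [week[i] for week in sessions], marker="o", label=label)
plt.xlabel("Session")
plt.ylabel("Self-rating (0-4)")
plt.title("Personalized symptom review")
plt.legend()
plt.show()
```

A spreadsheet with one column per item would accomplish the same thing; the point is that nothing here requires a purchased scoring package.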

So I support the idea of regularly conducting broad symptom reviews in psychotherapy, but I do not believe it is necessary to buy questionnaire packages.  It could be even better to design one's own package, or collaborate with a patient or client to design a custom, personalized survey.  

Monday, February 13, 2012

Statistics in Psychiatry & Medicine

This is a continuation of my thoughts about this subject. 

Statistical analysis is extremely important for understanding cause and effect!  A very strong factor in this issue is the way the human mind interprets data.  Daniel Kahneman, the Nobel laureate psychologist, is a great expert on this subject, and I strongly recommend his fantastic book Thinking, Fast and Slow.  I'd like to review his book in much more detail later, but as a start I will say that it clearly shows how the mind is loaded with powerful biases, which cause us to form rapid but erroneous impressions about cause and effect, largely because a statistical treatment of information is beyond the capacity of the rapid, reflexive intuition that dominates our moment-to-moment cognition.  And, of course, a lack of education about statistics and probability eliminates the possibility that the more rational part of our minds can overrule the reflexive, intuitive side.  Much of Kahneman's work has to do with how the mind intrinsically attempts to make sense of statistical information -- often reaching incorrect conclusions.  The implication here is that we must coolly calculate probabilities in order to interpret a body of data, and resist the urge to use "intuition," especially in a research study.

I do believe that a formal statistical treatment of data is much more common now in published research.  But I am now going to argue for something that seems entirely contradictory to what I've just said above!  I'll proceed by way of a fictitious example:

Suppose 1000 people are sampled (the sample size carefully chosen, using a statistical calculation, to detect a meaningful effect size if truly present, with a small probability of the effect being due to chance), all with a DSM diagnosis of major depressive disorder and HAM-D scores between 25 and 30.  And suppose they are divided into two groups of 500, matched for gender, demographics, severity, chronicity, etc.  Then suppose one group is given a treatment such as psychotherapy or a medication, and the other group is given a placebo treatment.  This could continue for 3 months, then the groups could be switched, so that every person in the study would at some point receive the active treatment and at another point the placebo.
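
As an aside, the sample-size calculation alluded to in the parenthetical might look something like the sketch below, assuming the outcome is analyzed with a simple two-group t-test; the effect size, alpha, and power are conventional illustrative choices, not values from any real study:

```python
# A sketch of a sample-size calculation for a two-arm trial; effect
# size, alpha, and power are illustrative conventions.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.18,  # small effect
                                          alpha=0.05,
                                          power=0.80,
                                          alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.0f}")
# Roughly 485-490 per group with these inputs -- the same ballpark as
# the 500-per-arm example above.
```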

This is a typical design for treatment studies, and I think it is very strong. If the result of the study is positive, this is very clear evidence that the active treatment is useful. 

But suppose the result of the study is negative.  What could this mean?  Most of us would conclude that the active treatment is therefore not useful.   --But I believe this is an incorrect conclusion!-- 

Suppose now that this is a study of people complaining of severe headaches, carefully matched for severity, chronicity, etc., and that the treatment offered was neurosurgery or placebo.  I think the results--carefully summarized by a statistical statement--would show that neurosurgery does not exceed placebo for the treatment of headache (in fact, I'll bet the neurosurgery group would do a lot worse!).

Yet -- in this group of 1000 people, it is possible that 1 or 2 of these headache sufferers had a headache due to a surgically curable brain tumor, or a hematoma.  These 1 or 2 patients would have a high chance of being cured by a surgical procedure, while some other therapy effective for most other headache sufferers (e.g. a triptan for migraine, or an analgesic, or relaxation exercises, etc.) would have either no effect or a spurious benefit (relaxation might make the headache pain from a tumor temporarily better -- and ironically would delay a definitive cure!).

Likewise, in a psychiatric treatment study, it may be possible that subtypes exist (perhaps based on genotype or some other factor currently not well understood), which respond very well to specific therapies, despite the majority of people in the group sharing similar symptoms not responding well to these same therapies.  For example, some individual depressed patients may have a unique characteristic (either biologically or psychologically) which might make them respond to a treatment that would have no useful effect for the majority.

With the most common statistical analyses done and presented in psychiatric and other medical research studies, there would usually be no way to detect this phenomenon:  negative studies would influence practitioners to abandon the treatment strategy for the whole group.  

How can this be remedied?  I think the simplest method would be trivial:  all research studies should include in the publication every single piece of data gathered!  If there is a cohort of 1000 people, there should be a chart or a graph showing the symptom changes over time of every single individual.  There would be a messy graph with 1000 lines on it (which is a reason this is not done, of course!) but there would be much less risk that an interesting outlier would be missed!  If most of the thousand individuals had no change in symptoms, there would be a huge mass of flat lines across the middle of the chart.  But if a few individuals had a total, remarkable cure of symptoms, these individuals would stand out prominently on such a chart.  Ironically, in order to detect such phenomena, we would have to temporarily leave aside the statistical tools which we had intended to use, and "eyeball" the data.  So intuition could still have a very important role to play in statistics & research! 
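
Concretely, such a plot takes only a few lines.  The data below are synthetic, built to mimic the scenario described above: a flat mass of non-responders plus a few dramatic outliers:

```python
# A sketch of the "messy graph with 1000 lines": synthetic data in which
# most trajectories are flat but a handful of individuals remit fully.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
weeks = np.arange(13)                      # 3 months of weekly scores
n = 1000

# Most subjects: stable scores around 27 with noise (the flat mass).
scores = 27 + rng.normal(0, 1.5, size=(n, weeks.size))

# A few outliers: dramatic, sustained remission.
for i in rng.choice(n, size=3, replace=False):
    scores[i] = 27 * np.exp(-0.4 * weeks) + rng.normal(0, 0.5, weeks.size)

plt.plot(weeks, scores.T, color="grey", alpha=0.1)
plt.xlabel("Week")
plt.ylabel("Symptom score (e.g. HAM-D)")
plt.title("Every individual trajectory, plotted for 'eyeballing'")
plt.show()
```

The three remitting curves stand out immediately against the grey mass, which is exactly the point of publishing every individual's data.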

After "eyeballing" the complete setof data from every individual, I do agree that this would have to lead to another formal hypothesis, which would subsequently have to be tested using a different study design, designed specifically to pick up such outliers, then a formal statistical calculation procedure would have to be used to evaluate whether the treatment would be effective for this group.  (e.g. the tiny group of headache sufferers specifically with a mass evident on a CT brain scan could enter a neurosurgery treatment study, to clearly show whether the surgery is better than placebo for this group).

I suspect that in many psychiatric conditions, there are subtypes not currently known about or well-characterized by DSM categorization.   Genome studies should be an interesting area in the future decades, to further subcategorize patients sharing identical symptoms, but who might respond very differently to specific treatment strategies. 

In the meantime, though, I think it is important to recognize that a negative study, even if done with very good study design and statistical analysis, does not prove that the treatment in question is ineffective for EVERYONE with a particular symptom cluster.  There might possibly be individuals who would respond well to such a treatment.  We could know this possibility better if the COMPLETE set of data results for each individual patient were published with all research studies.  

Another complaint I have about the statistics and research culture has to do with the term "significant."  I believe that "significance" is a construct that contradicts the whole point of doing a careful statistical analysis, because it requires pronouncing some particular probability range "significant" and others "insignificant."  Often, a p-value less than 0.05 is considered "significant."  The trouble with this is that the p-value speaks for itself; it does not require a human interpretive construct or threshold.  I believe that studies should simply report the p-value, without calling the results "significant" or not.  This way, two studies which yield p-values of 0.04 and 0.07 could be seen to show much more similar results than if you called the first study "significant" and the second "insignificant."  There may be some instances in which a p-value less than 0.25 could still usefully guide a long-shot trial of therapy -- this p-value would be very useful to know exactly, rather than simply reading that this was a "very insignificant" result.  Similarly, other types of treatment decisions might demand that the p-value be less than 0.0001 in order to be made safely.  Having a research culture in which p<0.05 equals "significant" dilutes the power and meaning of the analysis, in my opinion, and arbitrarily introduces a type of cultural judgment which is out of place for careful scientists.
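
A tiny illustration of the reporting style I am advocating, with invented score-change data: compute the test, then print the exact p-value with no verdict attached:

```python
# Report the exact p-value rather than a binary "significant /
# insignificant" pronouncement; both samples here are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
active = rng.normal(loc=-6.0, scale=8.0, size=60)   # score change, treated
placebo = rng.normal(loc=-4.0, scale=8.0, size=60)  # score change, placebo

t, p = stats.ttest_ind(active, placebo)
print(f"t = {t:.2f}, p = {p:.4f}")   # the p-value speaks for itself
```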

Thursday, January 21, 2010

Rating Scales: limitations & ideas for change

A visitor's comment from one of my previous posts reminded me of an issue I'd thought about before.

In mental health research, symptom scales are often used to measure therapeutic improvement. In depression, the most common scales are the Hamilton Depression Rating Scale (HDRS), the Montgomery-Åsberg Depression Rating Scale (MADRS), and sometimes the Beck Depression Inventory (BDI). The first two involve an interviewer assigning a score to a variety of different symptoms or signs. The last is a scale filled out by the patient.

Here are examples of questions from the HDRS, with associated ranges of scoring:
depressed mood (0-4); decreased work & activities (0-4); social withdrawal (0-4); sexual symptoms (0-2); GI symptoms (0-2); weight loss (0-2); weight gain (0-2); appetite increase (0-3); increased eating (0-3); carbohydrate craving (0-3); insomnia (0-6); hypersomnia (0-4); general somatic symptoms (0-2); fatigue (0-4); guilt (0-4); suicidal thoughts/behaviours (0-4); psychological manifestations of anxiety (0-4); somatic manifestations of anxiety (0-4); hypochondriasis (0-4); insight (0-2); motor slowing (0-4); agitation (0-4); diurnal variation (0-2); reverse diurnal variation (0-3); depersonalization (0-4); paranoia (0-3); OCD symptoms (0-2)

One can see from this list that depressive syndromes which have many physical manifestations will obviously score much higher. The highest possible score on the 29-item HDRS is 89. It is likely that physical manifestations of acute depression resolve more quickly, particularly in response to medications. Therefore, the finding that more severe depressions have better response to medication could be simply an artifact of the fact that physical symptoms respond better and more quickly to physical treatments.
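
The arithmetic behind that ceiling can be checked directly by summing the item maxima listed above:

```python
# Summing the maximum scores of the HDRS items listed above reproduces
# the stated maximum total of 89.
item_maxima = {
    "depressed mood": 4, "decreased work & activities": 4,
    "social withdrawal": 4, "sexual symptoms": 2, "GI symptoms": 2,
    "weight loss": 2, "weight gain": 2, "appetite increase": 3,
    "increased eating": 3, "carbohydrate craving": 3, "insomnia": 6,
    "hypersomnia": 4, "general somatic symptoms": 2, "fatigue": 4,
    "guilt": 4, "suicidal thoughts/behaviours": 4,
    "psychological anxiety": 4, "somatic anxiety": 4,
    "hypochondriasis": 4, "insight": 2, "motor slowing": 4,
    "agitation": 4, "diurnal variation": 2,
    "reverse diurnal variation": 3, "depersonalization": 4,
    "paranoia": 3, "OCD symptoms": 2,
}
print(sum(item_maxima.values()))  # 89
```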

A person who is eating and sleeping poorly, is tired, feels and looks physically ill, who is not working, who is not seeing friends as much, and whose symptoms fluctuate in the day, would already get an HDRS score of up to 30 -- without actually feeling depressed or anxious at all! A person feeling very depressed, struggling through life with little pleasure, meaning, satisfaction, or joy -- but sleeping ok, eating ok, and forcing self through daily routines such as work, social relationships, etc. -- might only get a score of 4-6 on this scale.

I acknowledge that the many questions on the HDRS cover a variety of important symptom areas, and improvement in any one of these domains can be very significant.

But -- a big problem of the scale, for me, is that the relative significance of the different symptoms is arbitrarily fixed by the structure of the questionnaire. So, for example, are the 4 points for fatigue of equivalent importance to the 4 points for guilt, or social withdrawal, or depressed mood? Would different individuals rate the relative importance of these symptoms differently? Maybe some people might prefer to sleep better, rather than socialize with greater ease. Also, perhaps some of the symptom questions deserve to be "non-linear," or context-dependent. So, for example, perhaps mild or intermittent depressed mood might deserve a score of only "1". Moderately depressed mood might warrant a score of "5". Severe depressive mood might warrant a score of "20". Or, relentless moderate symptoms over a period of years might warrant a score of "20", while only short-term or episodic moderate symptoms might warrant a score of "5".
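
As a sketch of what individualized, non-linear weighting might look like in practice -- where the item names, weighting functions, and ratings are all hypothetical illustrations:

```python
# Individualized, non-linear item weighting; the weighting functions
# and the patient's ratings below are hypothetical examples.
def weight_depressed_mood(raw):      # raw 0-4; severe mood dominates
    return {0: 0, 1: 1, 2: 5, 3: 10, 4: 20}[raw]

def weight_fatigue(raw):             # same raw range, flat weighting:
    return raw                       # this patient cares less about fatigue

ratings = {"depressed mood": 3, "fatigue": 4}
weights = {"depressed mood": weight_depressed_mood,
           "fatigue": weight_fatigue}

total = sum(weights[item](score) for item, score in ratings.items())
print(total)   # 14: mood contributes 10 points, fatigue only 4
```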

It would be interesting to change the weighting of these symptom scores, on an individualized basis.

Also, it would be interesting to see the results of depression treatment studies portrayed with all the separate symptom categories broken down (i.e. to see how the treatment changed each item on the HDRS). Many researchers or statisticians would complain that drawing conclusions about so many results at once weakens statistical significance. Statistically, a so-called "Bonferroni correction" is applied when multiple hypotheses are tested simultaneously: if n hypotheses are tested, the significance threshold for each one is divided by n (e.g. α = 0.05/n per comparison). Based on this statistical idea, most researchers prefer to analyze just a single quantity, such as the total HDRS score, instead of looking at each component of the score separately.

But, this analysis dilutes the data from any study, in the same way that the analysis of artworks in a museum would be diluted if each piece were summarized only by its mass or area.

A more complete analysis would portray every category at once. A graphical presentation would be reasonable, perhaps taking the form of a 3-d surface (once again). The x-axis could represent the different symptom areas (or scores on each item on the HDRS); the y-axis could represent time; and the z-axis could represent the severity. With this analysis, we could say that we are not actually making n hypotheses--we are making a single hypothesis, that the multifactorial pattern of symptom results, manifest as a 3-d surface, is changing over time. Each individual patient's symptom changes, in every symptom category, could be represented on the graph. In this way, no data, or analytic possibility, would be lost or diluted. The reader would be able to inspect every part of the data from the study, and perhaps notice interesting relationships which the original researchers had not considered.
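
A rough sketch of such a surface is below, with invented data in which the somatic items improve quickly while the mood and cognitive items lag -- exactly the kind of pattern a single total score would conceal:

```python
# A sketch of the proposed 3-d presentation: item on x, time on y,
# mean severity on z.  The severity array is invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

n_items, n_weeks = 27, 12
items = np.arange(n_items)
weeks = np.arange(n_weeks)
X, Y = np.meshgrid(items, weeks)

# Invented pattern: low-index (somatic) items improve quickly,
# high-index (mood/cognitive) items improve slowly.
Z = 3.0 * np.exp(-Y / (2.0 + 0.8 * X))

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, cmap="viridis")
ax.set_xlabel("HDRS item")
ax.set_ylabel("Week")
ax.set_zlabel("Mean severity")
plt.show()
```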

Some patterns of change with different treatments could present in the following ways, as shown on such a 3-d surface:
1) some symptoms improve dramatically with time, while others are much slower to change, or don't change at all. In depression treatment studies, sleep or appetite might change very quickly with a potent antihistaminic drug...this would immediately lead to pronounced improvement on the overall HDRS score, but might not be associated with any significant improvement in mood, energy, concentration, etc.
2) some symptoms might improve immediately, but deteriorate right back to baseline or worse after a few weeks or months. Benzodiazepine treatment would produce such a pattern, in terms of sleep or anxiety improvement. A medication which is sedating but addictive might cause rapid HDRS improvement, but only a careful look at individual category changes over a long period of time would allow us to see the addiction/tolerance pattern. Some people drink alcohol to treat their anxiety symptoms -- such a behaviour might rapidly improve their HDRS scores! But of course, the scores would return to worse than baseline within a few weeks or months, and the person would probably have new symptoms and problems on top of their original ones. So we must be cautious about getting too excited about claims of rapid HDRS change!
3) some treatments might cause a global change in most or all symptoms...this would be the goal of most treatment strategies. Such a pattern would imply that the multi-symptom syndrome (in this case, the "major depressive disorder" construct) is in fact valid, with all of its components improving together under a single treatment.
4) some combined treatments might work well together...for example, a treatment which helps substantially with energy or concentration (such as a stimulant), together with a treatment which helps with mood, socialization, optimism, or anxiety (such as psychotherapy, or an antidepressant). These treatments on their own might appear to be equivalent if only the total HDRS score is considered (since each would reduce symptom points overall); the synergistic effect would only be apparent by looking at each symptom domain separately.

Finally, I think it is important to look at very broad, simple indicators of quality of life, or of general improvement. The "CGI" scale is one example, although it is awkward and imprecise in design, and most likely prone to bias.

Quality of life scales are important as well, in my opinion, since they look at overall satisfaction with life, rather than merely a collection of symptoms.

In practice, only a discussion with the person receiving the treatment can really assess whether it is worthwhile to continue the treatment or not. In such a discussion, the subjective pros and cons of the treatment can be weighed. Even if the treatment has had a minimal impact on a rating score, it might be subjectively beneficial to the person receiving it. And even if the treatment has produced large rating score changes, it might not be the person's preference to continue. I suppose the role of a prescriber is mainly to facilitate such a dialog, and contradict the patient's wishes only if the treatment is objectively causing harm.

Wednesday, January 6, 2010

A Gene-Environment-Phenotype Surface


I've been thinking of a way to describe the interaction between genes, environment, and phenotype qualitatively as a mathematical surface.

In this model, the x-axis would represent the range of genetic variation relevant to a given trait. If it was a single gene, the x-axis could represent all existing gene variants in the population. Or, the idea could be extended such that the x-axis could represent all possible variants of the gene (including the absence of the gene, represented as "negative infinity" on the x-axis). The middle of the x-axis (x=0) would represent the average expression of the relevant gene in the population.

The y-axis would represent the range of environmental variation relevant to a given trait. y=0 would represent the average environmental history in the population. y="negative infinity" would represent the most extreme possible environmental adversity. y="positive infinity" would represent the most extreme possible environmental enrichment.

The z-axis would represent the phenotype. For example, it could represent height, IQ, extroversion, conscientiousness, etc.

In my opinion, current expressions of "heritability" represent something like the partial derivative ∂z/∂x at x=0 and y=0; or perhaps, since the calculation is based on a population sample, heritability would be the average of the derivatives ∂z/∂x over various sampled (x,y) points near (0,0).

Conventional heritability calculations give a severely limited portrait of the role of genes in phenotype, since they condense the information from what is really a 3-dimensional surface into a single number (the heritability). This is like looking at a sculpture, then being told that the sculpture can be represented by a single number such as "0.6", based on the average tilt at the top centre of the artwork.

A more comprehensive idea of heritability would be to consider it as a component of the gradient of z, namely ∂z/∂x. This gradient would not be a fixed quantity, but a function of x and y.
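
In symbols, the distinction might be written as follows (my own formalization of the verbal description above, not a standard definition from the heritability literature):

```latex
% Conventional heritability as a single local slope at the population
% mean, versus the proposed position-dependent gradient:
\[
  h^2 \;\approx\; \left.\frac{\partial z}{\partial x}\right|_{(x,y)=(0,0)}
  \qquad\text{vs.}\qquad
  \nabla z(x,y) \;=\;
  \left( \frac{\partial z}{\partial x},\; \frac{\partial z}{\partial y} \right),
\]
% where dz/dx describes genetic leverage and dz/dy environmental
% leverage, each varying from place to place on the surface.
```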

It is particularly interesting to me to consider other properties of this surface, such as: what is the derivative ∂z/∂y at different values of x and y? This would determine the ease with which environmental change could alter a phenotype for any given genotype.

A variety of different shapes for this surface could occur:

1) z could plateau (asymptotically) as y approaches infinity. This implies that the phenotype could not be changed beyond a certain point, regardless of the degree of environmental enrichment.
2) z could appear to plateau as y increases, but this is only because we do not yet have existing environments y>p, where p is the best current enriched environment. It may be that z could increase substantially at some point y>j, where j>p. I believe this is the case for most medical and psychiatric problems. It implies that we must develop better environments. Furthermore, it may be that for some genotypes (values of x), z plateaus as y increases, but for other genotypes z changes more dynamically. This implies that some people may inherit greater or lesser sensitivity to environmental change.
3) ∂z/∂x could be very high near the origin (x,y)=(0,0), leading to a high conventional estimate of heritability; but at different values of (x,y), ∂z/∂x could be much smaller. Therefore, it may be that for some individual genomes or environmental histories, genetic effects may be much less relevant, despite what appears to be "high heritability" in a trait.
4) ∂z/∂x could be very low near the origin, but much higher at other values of (x,y). Therefore, despite conventional calculations of heritability being low, there could be substantial genetic effects on phenotype for individuals whose genotypes or environmental histories are farther from the population mean.

The idea of x itself being fixed in an individual may also not be entirely accurate, since we now know of epigenetic effects. Also, evolving technology may allow us to change x therapeutically.

In order to describe such a "surface", many more data points would need to be analyzed, and some of these might be impossible to obtain in the current population.

But I think this idea might qualitatively improve our understanding of gene-environment interaction, in ways that could have practical applications. Current heritability estimates hover around 0.5 for almost any trait you can think of; this fact seems intuitively obvious, but it is not very helpful for inspiring therapy or change, can sometimes increase a person's sense of resignation about the possibility of therapeutic change, and can distort understanding of the relative impacts of genes and non-genetic environment.

Thursday, October 29, 2009

Spread of psychological phenomena in social networks

Here is a link to the abstract of an interesting article by Fowler & Christakis, published in the British Medical Journal in December 2008:
http://www.ncbi.nlm.nih.gov/pubmed/19056788

I think it is a delightful statistical analysis of social networks, based on a cohort of about 5000 people from the Framingham Heart Study, followed over 20 years. This article should really be read in its entirety, in order to appreciate the sophistication of the techniques.

They showed that happiness "spreads" in a manner analogous to contagion. Having happy same-sex friends or neighbours who live nearby increases one's likelihood of being, or becoming, happy. Interestingly, spouses and coworkers did not have a pronounced effect.

Also, the findings show that having "unhappy" friends does not cause a similar increase in likelihood of being or becoming "unhappy" -- it is happiness, not unhappiness, in the social network, which appears to "spread."

So the message here is not that people should avoid unhappy friends: in fact the message can be that befriending an unhappy person can be helpful not only to that unhappy individual, but to that unhappy person's social network.

There has been some criticism of the authors' techniques, but overall I find the analysis to be very thorough, imaginative, and fascinating.

Here are some practical applications suggested by these findings:

1) sharing positive emotions can have a substantial positive, lasting emotional impact on people near you, including friends and neighbours.
2) nurturing friendships with happier people who live close to you may help to improve subjective happiness.
3) this does not mean that friendships with unhappy people have a negative emotional impact, unless all of your friendships are with unhappy people.
4) in the treatment of depression, consideration of the health of social networks can be very important. Here, the "quantity" of the extended social network is not relevant (so the number of "facebook friends" doesn't matter). Rather, the relevant effects are due to the characteristics of the close social network, of 2-6 people or so, particularly those who have close geographic proximity. As I look at the data, I see that having two "happy friends" has a significantly larger positive effect than having only one, but there was not much further effect from having more than two.
5) I have to wonder whether the value of group therapy for depression is diminished if all members of the group are severely depressed. I could see group therapy being much more effective if some of the members were in a recovered, or recovering, state. This reminds me of some of the research about social learning theory (see my previous post: http://garthkroeker.blogspot.com/2008/12/social-learning-therapy.html)
6) on a public health level, the expense involved in treating individual cases of depression should be considered not only on the basis of considering that individual's improved health, function, and well-being, but also on the basis of considering that individual's positive health impact on his or her social network.
7) There is individual variability in social extroversion, or social need. Some individuals prefer a very active social life, others prefer relative social isolation. Others desire social activity, but are isolated or socially anxious. Those who live in relative social isolation might still have a positive reciprocal experience of this social network effect, provided that relationships with people living nearby (such as next-door neighbours or family) are positive.

I should conclude that, despite the strength of the authors' analysis, involving a very large epidemiological cohort, my inferences and proposed applications mentioned above could only really be proven definitively through randomized prospective studies. Yet such studies would be virtually impossible to do! Some of the social psychology literature attempts to address this, but I think it manages to do so only in a more limited and cross-sectional manner.

Tuesday, October 27, 2009

Positive Psychology (continued)

This is a response to a reader's comment on my post about positive psychology:
http://garthkroeker.blogspot.com/2009/10/positive-psychotherapy-ppt-for.html

Here's a brief response to some of your points:

1) I don't think there's anything wrong with focusing on pathology or weaknesses. In fact, I consider this type of focus to be essential. Imagine an engineering project in which structural weaknesses or failures were ignored, with a great big smile or a belief that "everything will be fine." Many a disaster has resulted from this kind of approach. I think of the space shuttle disaster, for example.

The insight from positive psychology though, in my opinion, has to do with re-evaluating the balance between a focus on "positivity" vs. pathology.

In depressive states, the cognitive stance is often overwhelmingly critical, about self, world, and future. Even if these views are accurate, they tend to prevent any solution of the problem they describe. It is like an engineering project where the supervisor is so focused on mistakes and criticism that no one can move on, all the workers are tired and demoralized, and perhaps the immediate, relentless focus on errors prevents a different perspective, and a healthy collaboration, which might actually definitively solve the problem.

2) I believe that pronouncements of the "right or wrong" of an emotional or intellectual position are finally up to the individual. It is not for me, or our culture, to judge. There will be all sorts of points of view about the morality or acceptability of any emotional or social stance: some of these points of view will be very critical or judgmental to a given person, some won't. I suppose there are elements of the culture that would harshly judge or criticize someone who appears too "happy": perhaps such a person would be deemed shallow, delusional, uncritical, vain, etc. I prefer to view ideas such as those in "positive psychology" as possible instruments of change, to be tried if a person wishes to try them. CBT, medications, psychoanalysis, surgery, having "negative friends" or "ditching them", etc. are all choices, change behaviours, or ways of managing life, which I think individuals should be free to consider if available, and if legal, but also free to reject if they feel it is not right for them.

In terms of the "gimmicky" nature of positive psychology, I agree. But I think most of the ideas are very simple, and are reflected in other very basic, widely accepted research in biology & behaviour. In widely disparate fields, such as the study of child-rearing, education, coaching, or animal training, it is clear that recognition and criticism of "faults" or "pathologies" is necessary in order for problems to be resolved. Yet the mechanism by which change most optimally occurs is by instilling an atmosphere of warmth, reward, comfort, and joy, with a minority of feedback having to do with criticism. The natural instinct with problematic situations, however, is often to punish. Punishing a child for misbehaviour may at times be necessary, but most times child punishments are excessive and ineffectual, often are more about the emotional state of the punisher rather than the behavioural state of the child, and ironically may reinforce the problems the child is being punished for. Punishing a biting dog through physical injury will teach the dog to be even more aggressive. I find this type of cycle prominent in depressive states: there may be a lot of internal self-criticism (some of which may be accurate), but it leads to harsh self-punishment which ends up perpetuating the depressive state. I find the best insights of "positive psychology" have to do with stepping out of this type of punitive cycle, not by ignoring the negative, but by deliberately trying to nurture and reward the positive as well.

3) The research about so-called "depressive realism" has always seemed quite suspect to me. In a person with PTSD (a disorder which I consider highly analogous to depression and other mental illnesses), there is very often a high degree of sensitivity to various stimuli, which may, for example, allow that person to be more vigilant about the potential dangers associated with the sound of footsteps in the distance, or the smell of smoke, etc. Often, though, this heightened vigilance comes at great expense to that person's ability to function in life: a pleasant walk, a work environment, or a hug may instead become a terrifying journey or a place of constant fear of attack.

Similarly, in depressive states, there may be beliefs that are, on one level, accurate, but on another level are causing a profound impairment in life function (e.g. regarding socializing, learning, work, simple life pleasures, spirituality, etc.).

With regard to science, I do not find any need to say that "positive psychology" etc. is about a biased interpretation of data. Instead, my analogy would be along the lines of how one would solve a complex mathematical equation:
-a small minority of mathematical problems have a straightforward, closed-form answer. If one were to look only at precedents in the data, one might conclude that many problems have no definable answer at all. A cynical and depressive approach would be to abandon the problem.
-but most complex problems today require what is called a "numerical analysis" approach. This basically necessitates guessing at the solution, then applying an algorithm that will "sculpt" the guess closer to the true answer. Sometimes the algorithm doesn't work, and the attempted solutions "diverge." But convergence to a solution through numerical methods is among the most powerful phenomena in modern science; it has permitted almost every major advance in science and engineering in the past hundred years. It is basically analogous to positive behavioural shaping in psychology. It is not about biased interpretation of data; it is about using a set of "positive" tools to solve a problem (in the mathematical case, to get numerical solutions; in the psychological case, to relieve symptoms, to increase freedom of choice, and to expand the realm of possible life functions available).
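
To make the analogy concrete, here is the classic example of such an algorithm, Newton's method; the function and starting guess are arbitrary illustrations:

```python
# Newton's method: start from a rough guess and let the algorithm
# "sculpt" it toward the true solution; if the steps stop shrinking,
# the attempt has diverged.  Function and starting guess are arbitrary.
def newton(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("diverged: attempted solutions did not converge")

# Solve x**3 - x - 2 = 0, which has no obvious answer by inspection.
root = newton(lambda x: x**3 - x - 2, lambda x: 3 * x**2 - 1, x0=1.5)
print(root)  # ~1.5214
```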

4) Some of the experiments are weak, no doubt about that. I don't consider experiments evaluating superficial cross-sectional affect to be relevant to therapy research. Experiments which evaluate the change in symptoms and subjective quality of life measures over long periods of time, are most relevant to me. I consider "positive psychology" to be just one more set of ideas that may help to improve quality of life, and overall life function, as subjectively defined by a patient.

In my discussion of this subject, I am not meaning to suggest that so-called "positive psychology" is my favoured therapeutic system. Some of the ideas may be quite off-putting to individuals who may need to deal with a lot of negative symptoms directly before doing "positivity exercises." But I do think that some of the ideas from positive psychology are important and relevant, and deserve to be adopted as part of an eclectic therapy model.

Monday, October 19, 2009

The Importance of Two-Sided Arguments

This is a topic I was meaning to write a post about for some time. I encountered this topic while doing some social psychology reading last year, and it touches upon a lot of other posts I've written, having to do with decision-making and persuasion. It touches on the huge issue of bias which appears in so much of the medical and health literature.

Here is what some of the social psychology research has to say on this:

1) If someone already agrees on an issue, then a one-sided appeal is most effective. So, for example, if I happen to recommend a particular brand of toothpaste, or a particular political candidate, and I simply give a list of reasons why my particular recommendation is best, then I am usually "preaching to the converted." Perhaps more people will go out to buy that toothpaste brand, or vote for that candidate, but they would mostly be people who would have made those choices anyway. The only others who would be most persuaded by my advice would be those who do not have a strong personal investment or attachment to the issue.

2) If people are already aware of opposing arguments, a two-sided presentation is more persuasive and enduring. And if people disagree with a certain issue, a two-sided presentation is more persuasive to change their minds. People are likely to dismiss as biased a one-sided presentation which disagrees with their point of view, even if the presentation contains accurate and well-organized information. This is one of my complaints about various types of media and documentary styles: sometimes there is an overt left-wing or right-wing political bias that is immediately apparent, particularly to a person holding the opposing stance. I can think of numerous examples in local and international newspapers and television. The information from such media or documentary presentations would therefore have little educational or persuasive impact except with individuals who probably agree with the information and the point of view in advance. The strongest documentary or journalistic style has to be one which presents both sides of a debate, otherwise it is probably almost worthless to effect meaningful change--in fact it could entrench the points of view of opposing camps.


It has also been found that if people are already committed to a certain belief or position, then a mild attack or challenge of this position causes them to strengthen their initial position. Ineffective persuasion may "inoculate" people attitudinally, causing them to become more committed to their initial positions. In an educational sense, children could be "inoculated" against negative persuasion--such as television ads, or peer pressure to smoke--by exploring, analyzing, and discussing such persuasive tactics with parents or teachers.

However, such "inoculation" may be an instrument of attitudinal entrenchment and stubbornness: a person who has anticipated arguments against his or her committed position is more likely to hold that position more tenaciously. Or an individual who has been taught a delusional belief system may have been taught the various challenges to the belief system to expect: this may "inoculate" the person against challenging this belief system, and cause the delusions to become more entrenched.

An adversarial justice system resembles, to some degree, an efficient process, from a psychological point of view, for seeking the least biased truth. However, the problem here is that both sides "inoculate" themselves against the evidence presented by the other. The opposing camps do not seek "resolution"--they seek to win, which is quite different. Also, the prosecution and the defense do not EACH present a balanced analysis of the pros and cons of their cases. Information may be withheld--the defense may truly know the guilt of the accused, yet this may not be shared openly in court. Presumably the prosecution would not prosecute if the innocence of the accused were known for sure.

Here are some applications of these ideas, which I think are relevant in psychiatry:

1) Depression, anxiety, and other types of mental illness, tend to feature entrenched thinking. Thoughts which are very negative, hostile, or pessimistic--about self, world, or future--may have been consolidated over a period of years or decades, often reinforced by negative experiences. In this setting, one-sided optimistic advice--even if accurate-- could be very counterproductive. It could further entrench the depressive cognitive stance. Standard "Burns style" cognitive therapy can also be excessively "rosy", in my opinion, and may be very ineffective for similar reasons. I think of the smiling picture of the author on the cover of a cognitive therapy workbook as an instant turn-off (for many) which would understandably strengthen the consolidation of many chronic depressive thoughts.

But I do think that a cognitive therapy approach could be very helpful, provided it includes the depressive or negative thinking in an honest, thorough, systematic debate or dialectic. That is, the work has to involve "two-sided argument".

2) In medical literature, there is a great deal of bias going on. Many of my previous postings have been about this. On other internet sites, there are various points of view, some of which are quite extreme. Sites which are invariably about "pharmaceutical industry bias," etc., are, I think, actually quite ineffectual if they merely cover the same theme over and over again. They are likely to be sites which are "preaching to the converted," and are likely to be viewed as themselves biased or extreme by someone looking for balanced advice. They may cause individuals with an already biased point of view to unreasonably entrench their positions further.

Also, I suspect the authors of sites like this may themselves have become quite biased. If their site has repeatedly criticized the inadequacy of the research data about some drug intended to treat depression or bipolar disorder, etc., they may be less likely to consider or publish contrary evidence that the drug actually works. Once we commit ourselves to a position, we all have a tendency to cling to that position, even when evidence should sway us.

On the other hand, if there is a site which consistently gives medication advice of one sort or the other, I think it is unlikely to change very many opinions on this issue, except among those who are already trying out different medications.

So, in my opinion, it is a healthy practice when analyzing issues, including health care decisions, to carefully consider both sides of an argument. If the issue has to do with a treatment, including a medication, a style of psychotherapy, an alternative health care modality, or of doing nothing at all, then I encourage the habit of analyzing the evidence in two ways:
1) gather all evidence which supports the modality
2) gather all evidence which opposes it

Then I encourage a weighing, and a synthesis, of these points of view, before making a decision.
I think that this is the most reliable way to minimize biases. If such a system is applied to one's own attitudes, thoughts, values, and behaviours, I think it is the most effective to promote change and growth.



References:
Myers, David. Social Psychology, fourth edition. New York: McGraw-Hill; 1993. p. 275; 294-297.

Friday, October 16, 2009

Social Psychology

Social psychology is a wonderful, enchanting field.

It is full of delightful experiments which often reveal deeply illuminating facets of human nature.
The experiments are usually so well done that it is hard to argue with the results.

Many people in mental health fields, such as psychiatry, have not studied social psychology. I never took a course in it myself. I feel like signing up for one now.

Applications of social psychology research could apply to treating anxiety & depression; resolving conflict; improving morale; reducing violence on a personal or social level; improving family & parental relationships; building social relationships, etc.

My only slight criticism of typical social psychology research is that it tends to be quite cross-sectional, and the effects or conditions studied are most often short-term (i.e. results that could typically be obtained in a study lasting a single afternoon). My strongest interest is in applied psychology, and I believe that immediate psychological effects can be important, but long-term psychological effects are of greatest importance. The brain works this way, on many levels: the brain can habituate to immediate stimuli, if those same stimuli are repeated over weeks or months. Learning in the brain can start immediately, but deeply ingrained learning (akin to language or music learning) takes months or years. So some results from a day-long study may only be as deeply insightful as administering a medication for a single day -- the effects haven't had a chance to accumulate or be subject to habituation.

In any case, I strongly encourage those interested in mental health to read through a current social psychology textbook (examples of these tend to be very well-written, readable, and entertaining), and to consider following the latest social psychology research. The biggest journal in social psychology is the Journal of Personality and Social Psychology.

Tuesday, October 13, 2009

Increasing anxiety in recent decades...continued

This is a sequel to a previous posting (http://garthkroeker.blogspot.com/2009/06/increasing-anxiety-in-recent-decades.html)

A visitor suggested the following July 2009 article to look at regarding this subject--here's a link to the abstract:
http://www.ncbi.nlm.nih.gov/pubmed/19660164

The author, "Ian Dowbiggin, PhD", is a history professor at the University of Prince Edward Island.

I found the article quite judgmental and poorly informed.

I thought there were some good points, exploring the interaction of social dynamics, political factors, secondary gain, etc. in the evolution of diagnostic labels; and perhaps exploring the idea that we may at times over-pathologize normal human experiences, character traits, or behaviours.

But, basically the author's message seems to be that we cling to diagnostic labels to avoid taking personal responsibility for our problems--and that therapists, the self-help movement, pharmaceutical companies, etc. are all involved in perpetuating this phenomenon.

Another implied point of view was that a hundred years ago, people might well have experienced similar symptoms, but would have accepted these symptoms as part of normal life, and carried on (presumably without complaint).

To quote the author:

"The overall environment of modern day life...bestows a kind of legitimacy on the pool of
anxiety-related symptoms"

This implies that some symptoms are "legitimate" and others are not, and that it is some kind of confusing or problematic feature of modern society that anxiety symptoms are currently considered "legitimate."

I am intensely annoyed by opinion papers which do not explore the other side of the issues--so here is another side:

1) perhaps, a hundred years ago, people suffered just as much, or worse, but lacked any sort of help for what was bothering them. They therefore lived with more pain, less productivity, less enjoyment, less of a voice, more isolation, and in most cases died at a younger age.

2) The development of a vocabulary to describe psychological distress does not necessarily cause more distress. The vocabulary helps us to identify experiences that were never right in the first place. The absence of a PTSD label does not mean that symptoms secondary to trauma did not exist before the 20th century. The author somewhat mockingly suggests that some people misuse a PTSD or similar label--that perhaps only those subject to combat trauma are entitled to use it, while those subject to verbal abuse in home life are not.

The availability of financial compensation related to PTSD has undoubtedly affected the number of people describing symptoms. But the author appears to leave readers with the impression that those seeking compensation via PTSD claims are "milking the system" (this is the subtitle of the PTSD section of the paper). There is little doubt that factitious and malingered symptoms are common, particularly when there is overt secondary gain. And the question of how therapeutic it is to have long-term financial compensation for any sort of problem is another matter, for an evidence-based and politically charged debate. But to imply that all those who make financial claims regarding PTSD are "milking the system" seems very disrespectful to me. And to imply that a system which offers such compensation is somehow problematic again seems comparable to saying that the availability of fire or theft insurance is problematic. A constructive point of view on the matter, as far as I'm concerned, would be to consider ways to make compensation systems fairer and more resistant to factitious or malingered claims.

With regard to social anxiety -- it may well be that "bashfulness" has been valued and accepted in many past--and present--cultures. But I suspect that the social alienation, social frustration, loneliness, and lack of ability to start new friendships, new conversations, or to find mates, have been phenomena similarly prevalent over the centuries. Our modern terminology suggests ways for a person who is "bashful" to choose for himself or herself, whether to stoically and silently accept this set of phenomena, or to address it as a medical problem, with a variety of techniques to change the symptoms. In this way the language can be empowering, leading to the discovery and nurturance of a voice, rather than leading to a sense of "victimhood."

Perhaps the lack of a vocabulary to articulate distress causes a spurious impression that the distress does not exist, or is not worthy of consideration. A historical analogy might be something along the lines of this: terms such as "molecule", "Uranium", or "electromagnetic field," may not have been used before 1701, 1797, or 1820, but this was merely a product of ignorance, not evidence of the non-existence of these phenomena in the 1600's and prior.

It may well be true that many individuals misuse the vocabulary, or may exploit it for secondary gain. And it may well be true that some diagnostic labels introduce an iatrogenic or factitious illness (the multiple personality disorder issue could be debated along these lines). But to imply that the vocabulary itself is harmful to society is akin to saying that fire insurance is harmful, since some people misuse it by deliberately burning their houses down.


3) Similarly, the so-called self-help movement may involve some individuals fleeing into self-pathologizing language while ironically neglecting a healthy engagement with their lives. But in most cases, it has actually helped people to recognize, label, and improve their problems. For a start on some evidence regarding this, see the following meta-analysis on self-help for anxiety disorders: http://www.ncbi.nlm.nih.gov/pubmed/16942965

---
So, in conclusion, it is interesting to hear a different point of view. But I would expect a distinguished scholar to provide a much more balanced and insightful debate in such a paper, especially when it is published in a journal which is supposed to have high standards.

And I would certainly expect a much more thorough exploration of research evidence. The presence of 35 references in this paper may fool some readers into thinking that a reasonable survey of the research has been undertaken. Almost all of the references are themselves opinion pieces which merely support the author's point of view.

Thursday, October 8, 2009

Is Seroquel XR better than generic quetiapine?

A supplement written by Christoph Correll for The Canadian Journal of Diagnosis (September 2009) was delivered--free--into my office mailbox the other day.

It starts off describing the receptor-binding profiles of different atypical antipsychotic drugs. A table is presented early on.

First of all, the table as presented is almost meaningless: it merely shows the concentrations of the different drugs required to block 50% of the given receptors. These so-called "Ki" concentrations have little meaning, particularly for comparing between one drug and another, UNLESS one has a clear idea of what concentrations the given drugs actually reach when administered at typical doses.

So, of course, quetiapine has much higher Ki concentrations for most receptors, compared to risperidone -- this is related to the fact that quetiapine doses are in the hundreds of milligrams, whereas risperidone doses are less than ten milligrams (these dose differences are not reflective of anything clinically relevant, and only pertain to the size of the tablet needed).

A much more meaningful chart would show one of the following:

1) the receptor blockades for each drug when the drug is administered at typical doses

2) the relative receptor blockade compared to a common receptor (so, for example, the ratio between receptor blockades of H1 or M1 or 5-HT2 compared to D2, for each drug).
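
For instance, the first kind of chart could be derived from published Ki values and typical free plasma concentrations via the standard single-site occupancy formula. In the sketch below, every number is a hypothetical placeholder, not real pharmacological data; the point is only that occupancy, unlike raw Ki, accounts for the dose actually given:

```python
# Sketch of chart (1): fractional receptor occupancy from a Ki value
# and a typical free drug concentration, using the standard single-site
# binding formula occupancy = C / (C + Ki).  All numbers below are
# hypothetical placeholders, not real pharmacological data.
def occupancy(concentration_nM, ki_nM):
    return concentration_nM / (concentration_nM + ki_nM)

drug_ki = {"D2": 300.0, "H1": 10.0, "5-HT2A": 100.0}  # hypothetical Ki (nM)
c_typical = 400.0                          # hypothetical free level (nM)

for receptor, ki in drug_ki.items():
    print(f"{receptor}: {occupancy(c_typical, ki):.0%} occupied")
```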

The article goes on to explore a variety of other interesting differences between antipsychotics. Many of the statements made were theoretical propositions, not necessarily well-proven empirically. But in general I found this discussion valuable.

Despite the author's apparent efforts to be fair and balanced regarding the different antipsychotics, I note a few things:

1) there are two charts in this article showing symptom improvements in bipolar disorder among patients taking quetiapine extended-release (Seroquel XR).

2) one large figure appears to show that quetiapine has superior efficacy in treating schizophrenia, compared to olanzapine and risperidone (the only "p<.05 asterisk" was for quetiapine!) -- this figure was based on a single 2005 meta-analysis, published in a minor journal, before the CATIE results were published. No other figures were shown based on more recent results, nor was clozapine included in any figure.

I think quetiapine is a good drug. BUT -- I don't see any evidence that quetiapine extended release is actually any better, in any regard, than regular quetiapine. In fact, I have seen several patients for whom regular quetiapine was a better fit than extended-release, and for whom a smaller total daily dose sufficed.

Here is a reference to one study, done by AstraZeneca, comparing Seroquel with Seroquel XR in healthy subjects: http://www.ncbi.nlm.nih.gov/pubmed/19393840 It shows that subjects given regular quetiapine were much more sedated 1 hour after dosing, compared to those given the same dose of Seroquel XR, implying that the extended-release formulation was superior in terms of side-effects. Here is my critique of this study:

1) Sedation is often a goal in giving quetiapine, particularly in the treatment of psychosis or mania.

2) Problematic sedation is usually the type that persists 12 hours or more after the dose, not the type present one hour after the dose. In this study, the two formulations did not differ in a statistically significant way with respect to sedation 7, 8, or 14 hours after dosing. In fact, if you look closely at the tables presented within the article, the Seroquel XR group actually had slightly higher sedation scores 14 hours after dosing.

3) Dosing of any drug can be titrated to optimal effect. Regular quetiapine need not be given at exactly the same dose as quetiapine XR -- giving both drugs at the same dose, rather than at the optimally effective dose for each, is likely to bias the results greatly.

4) The study lasted only 5 days for each drug! To meaningfully compare effectiveness or side-effects between two drugs, it is necessary to look at differences after a month, or a year, of continuous treatment. For most sedating drugs, problematic sedation diminishes over a period of weeks or months. Once again, if immediate sedation is the measure of side-effect adversity, then this study is biased in favour of Seroquel XR.

5) The study was done in healthy subjects who did not have active symptoms to treat. This reminds me of giving insulin to non-diabetic subjects and comparing the side-effects of different insulin preparations: the choice of population is an obvious, strong bias!


Regular quetiapine has gone generic.

Quetiapine extended-release (Seroquel XR) has not.

I am bothered by the possibility of bias in Correll's article.

It is noted, in small print at the very end of this article, that Dr. Correll is "an advisor or consultant to AstraZeneca, Bristol-Myers Squibb, Cephalon, Eli Lilly, Organon, Ortho McNeill-Janssen, Otsuka, Pfizer, Solvay, Supernus, and Vanda." AstraZeneca is the company which manufactures Seroquel XR.

In conclusion, I agree that there are obviously differences in receptor binding profiles between these different drugs. There are some side-effect differences.

Differences in actual effectiveness, as shown in comparative studies, are minimal. But probably olanzapine, and especially clozapine, are slightly better than the others, in terms of symptom control.

Quetiapine can be an excellent drug. Seroquel XR can be an excellent formulation of quetiapine, and might suit some people better.

BUT -- there is no evidence that brand-name Seroquel XR is superior to generic regular quetiapine.

One individual might respond better to one drug, compared to another.

The author, despite including 40 references, seems to have left out many important research studies on differences between antipsychotics, such as from CATIE and SOHO.

(see my previous post on antipsychotics: http://garthkroeker.blogspot.com/2008/12/antipsychotic-medications.html )

Monday, October 5, 2009

The need for CME

Here's another article from "the last psychiatrist" on CME:
http://thelastpsychiatrist.com/2009/07/who_should_pay_for_continuing.html#more

Another insightful article, but pretty cynical!

But here are some of my opinions on this one:

1) I think that, without formalized CME documentation requirements, there would be some doctors who would fall farther and farther behind in understanding current trends of practice, current research evidence, etc.
2) In the education of intelligent individuals, I have long felt that process is much more important than content. A particular article with an accompanying quiz is bound to convey a certain biased perspective; it is my hope that most professionals are capable of recognizing and resisting such biases. In this modern age, I do think that most of us have a greater understanding of bias, of being "sold" something. In any case, the process of working through such an article provides a structure for contemplating a particular subject, and perhaps for raising questions or an internal debate to reflect upon, or to research further, later on. Yet I agree that many psychiatrists might be swayed, in a non-critical manner, by a biased presentation of information; the subsequent quiz, and the individual's high marks on it, then become reinforcers for learning biased information.
3) After accurately critiquing a problem, we should then move on and try to work together to make more imaginative, creative educational programs which are stimulating, enjoyable, fair, and as free of bias as possible.

I think this concludes my little journey through this other blog. While interesting, I find it excessively cynical. It reminds me of someone in the back seat of my car continuously telling me--accurately, and perhaps even with some insightful humour--all the things I'm doing wrong. Maybe I need to hear this kind of feedback periodically--but small doses are preferable! Actually, I find my own writing at this moment becoming more cynical than I want it to be.

Biased Presentation of statistical data: LOCF vs. MMRM

This is a brief posting about biostatistics.

In clinical trials, some subjects drop out.

The quality of a study is best if there are few drop-outs, and if data continues to be collected on those who have dropped out.

LOCF and MMRM are two different statistical approaches to dealing with study populations where some of the subjects have dropped out.

The two techniques may generate different numbers, and therefore different conclusions, from the same data.

The following article illustrates how these techniques can skew the presentation of data, and therefore change our conclusions about an issue, despite nothing "dishonest" taking place:

http://thelastpsychiatrist.com/2009/06/its_not_a_lie_if_its_true.html#more

While I agree with the general point of the above article, I find that the specific example it refers to is not necessarily an instance of bias: researching the subject myself, I find that LOCF is not clearly superior to MMRM, although LOCF is the most commonly used method for dealing statistically with drop-outs. The following references make a case that MMRM is less biased than LOCF most of the time (though it should be remembered that whenever drop-outs are lost to follow-up, the absence of data on those subjects weakens the study results -- an issue worth considering closely when reading any paper):
http://www.stat.tamu.edu/~carroll/talks/locfmmrm_jsm_2004_rjc.pdf
http://www3.interscience.wiley.com/journal/114177424/abstract?CRETRY=1&SRETRY=0
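For readers unfamiliar with LOCF, here is a minimal sketch, in Python with invented scores, of what the technique actually does, and why it can skew an endpoint analysis:

```python
# Toy depression scores (lower = better) at weeks 0..4; None = dropped out.
subjects = {
    "completer":                [30, 26, 22, 18, 14],
    "dropout after early gain": [30, 18, None, None, None],
}

def locf(series):
    """Last Observation Carried Forward: fill each gap with the last seen value."""
    filled, last = [], None
    for x in series:
        if x is not None:
            last = x
        filled.append(last)
    return filled

for name, scores in subjects.items():
    print(f"{name}: {locf(scores)}")
# The dropout's brief week-1 improvement (18) is frozen and counted as their
# week-4 "endpoint," even though nothing is known about how they actually did.
# MMRM instead fits a model to the observed trajectories without inventing
# endpoint values, so the two approaches can legitimately disagree.
```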

In conclusion, I can only encourage readers of studies to be more informed about statistics. And, if you are looking at a study which could change your treatment of an illness, then it is important to read the whole study, in detail, if possible (not just the abstract).

Which is better, a simple drug or a complex drug?

Here is another critique of medication marketing trends in psychiatry:

http://thelastpsychiatrist.com/2009/04/how_dangerous_is_academic_psyc_1.html#more

I agree quite strongly that there has been collusion between:
- psychiatrists who eagerly yearn to meaningfully apply their knowledge of psychopharmacology, pharmacokinetics, neurotransmitter receptor binding profiles, etc. (to justify all those years of study)
- and pharmaceutical company sales reps

I can recall attending many academic rounds presentations in which a new drug would be discussed, for example a newly released SSRI. During the talk, there would be boasting about how the new drug had the highest "receptor specificity," or the lowest activity at receptors other than those for serotonin (e.g. histamine or acetylcholine receptors).

These facts that I was being shown, while enjoying my corporate-sponsored lunch, were true. But they were used as sales tactics, bypassing clear scientific thought. Just because something is more "receptor-specific" doesn't mean that it works better! It may in some cases be related to a difference in side effects. Yet sometimes those very side-effects may be related to the efficacy of the drug.

By way of counter-example, I would cite the most effective of all antipsychotic medications, clozapine. This drug has very little "receptor specificity." It interacts with all sorts of different receptors. And it has loads of side effects too. Perhaps this is part of the reason it works so well. Unfortunately, this does not sit well with those of us who yearn to explain psychiatric medication effects using simple flow charts.

Similarly, the pharmacokinetic differences between medications are often used as instruments of persuasion -- yet these differences are frequently clinically irrelevant, of unproven clinical relevance, or even clinically inferior (e.g. newer SSRI antidepressants have short half-lives, which can be advantageous in some regards; but plain old Prozac, with its very long half-life, can be an excellent choice, because individuals taking it can safely skip a dose without a big change in serum level and the ensuing side-effects).
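A back-of-envelope sketch of the half-life point, assuming simple first-order elimination (real pharmacokinetics is messier, and the half-life figures below are rough illustrative assumptions):

```python
# Fraction of the serum level remaining after one skipped day, under
# first-order elimination: remaining = 0.5 ** (hours_missed / half_life).
half_lives_h = {
    "short half-life SSRI (assume t1/2 ~ 24 h)": 24.0,
    "fluoxetine (t1/2 on the order of ~4 days)": 96.0,
}

missed_gap_h = 24.0  # one extra day without a dose

for name, t_half in half_lives_h.items():
    remaining = 0.5 ** (missed_gap_h / t_half)
    print(f"{name}: ~{remaining:.0%} of the serum level remains")
# -> roughly 50% vs ~84%: the long half-life buffers a missed dose,
#    which is the advantage described above.
```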

I should not be too cynical here -- it is important to know the scientific facts that can be known about something. Receptor binding profiles and half-lives, etc. are important. And it can be useful to find medications that have fewer side-effects, because of fewer extraneous receptor effects. The problem is when we use facts spuriously, or allow them to persuade us as part of someone's sales tactic.

So, coming back to the question in the title, I would say it is not necessarily relevant whether a drug works in a simple or complex way. It is relevant whether it works empirically, irrespective of the complexity of its pharmacologic effects.

Pregnancy & Depressive Relapse

I was looking at an article in JAMA from 2006, which was about pregnant women taking antidepressants. They were followed through pregnancy, and depressive relapses were related to changes in antidepressant dose. Here's a link to the abstract:

http://www.ncbi.nlm.nih.gov/pubmed/16449615

The study is too weakly designed to allow strong conclusions. Yet the abstract makes a statement about "pregnancy not being protective" which--while possibly true--is not directly supported by the findings of the study. This criticism was astutely made by the author of "The Last Psychiatrist" blog:
http://thelastpsychiatrist.com/2006/10/jama_deludes.html

Yet the JAMA study is not uninformative.

And the criticism mentioned above goes a bit too far, in my opinion. The critique itself makes overly strong statements in its own title & abstract.

It appears quite clear that pregnant women with a history of depressive illness, who are taking antidepressants, but decrease or discontinue their medication during the pregnancy, have a substantially higher risk of depressive relapse.

Because the study was not randomized, we cannot know for sure that this association is causal. But causation is a reasonable inference. It does not seem likely that this large effect was produced by women whose "unstable" depressive symptoms led them to discontinue their antidepressants (i.e. it does not seem likely to me that "reverse causation" is a prominent explanation for this finding). I think this could happen in some cases, but not frequently. Nor does it seem likely to me that a woman already taking an antidepressant, who becomes more depressed during the pregnancy, would therefore stop taking her medication. This, too, could happen (I can think of clinical examples), but I don't think it would be common. It seems most likely to me that the causation is quite simple: stabilized depressive illness during pregnancy is likely to become less stable, and more prone to relapse, if antidepressant medication is discontinued.

The critique of this article also discusses the fact that women in the study who increased their doses of medication also had higher rates of depressive relapse, yet this fact is not mentioned very much in the abstract or conclusion. This finding is also not surprising--what other reason would a pregnant woman have to increase a dose of medication which she was already taking during her pregnancy, other than an escalation of symptoms? In this case, depressive relapse (which can happen despite medication treatment) is likely the cause of the increased dose--the increased dose is unlikely to have caused the depressive relapse.

Yet, as I said above, the study only allows us to infer these conclusions, as it was not randomized. And I agree that the authors overstate their conclusions in the abstract. In order to more definitively answer these questions, a randomized prospective study would need to be done.

Friday, September 25, 2009

Randomized Controlled Trials in psychiatry

There is a good debate presented in the September 2009 issue of the Canadian Journal of Psychiatry (pp. 637-643), about the importance of randomized controlled trials in psychiatric research and clinical practice.

Steven Hollon presents a strong case supporting the philosophical foundations of RCT research, while Bruce Wampold presents many good points about the limitations and weaknesses prevalent in current psychiatric RCT studies. In particular, Wampold points out that much evidence exists regarding the relevance of the individual therapist (and, I might add, of the individual sense of patient-therapist alliance or connection) in determining therapeutic outcomes, and that this individual factor may have a stronger influence on outcome than the particular "treatment" being offered (whether it be CBT, psychoanalysis, a medication combination, etc.).

My own view of a lot of the evidence resonates with these ideas. I strongly support the importance of randomized controlled trials in medicine and psychiatry. Yet it often seems to me that many variables are not accounted for. The impact of the individual therapist is one specific factor. If the patient is more comfortable with one therapist than another, then this factor alone may greatly outweigh the effect of the particular style of therapy being offered. Interestingly, this factor may not depend on the therapist's length of experience -- sometimes a trainee may have a more positive therapeutic impact than a therapist with decades of experience. This does not surprise me: much of psychotherapy depends on the capacity of the therapeutic relationship to grow and be healthy, which may in turn depend substantially on very personal factors in the therapist. This may be humbling to those of us who revere the notion of psychotherapeutic theory being of paramount importance.

The whole of psychiatric theory may, at least in some cases, be less important than the goodness of a single interpersonal connection.

But I do also believe that certain therapeutic techniques are more effective than others. I think that strategies which promote daily long-term psychological work just have to be more effective (along the lines of language learning again). Also I think that strategies which encourage and help a person to face their fears or to move away from destructive habits are more likely to be helpful than strategies which do not look at these issues.

Many other factors are often not controlled (or examined at all) in present psychiatric RCTs, including nutrition, exercise, other self-care activities, supportive relationship involvement, community involvement, altruistic activity, etc.

Another factor I have considered is the heterogeneity of many studied psychiatric populations. Different individuals with so-called "major depressive disorder" may in fact have different underlying causes for their symptoms; some of these individuals may respond well to one type of treatment, others to something else. I suppose the RCT design remains appropriate in this situation, yet a major focus of research, in my opinion, needs to be examining why some people respond to a given treatment while others don't.

This erratic pattern of response doesn't just happen with individuals in a particular study. There are whole studies in which a well-proven psychiatric treatment (such as an antidepressant) doesn't end up differing from placebo. I don't think such studies show that antidepressants (or other treatments) are ineffective, but I do think they strongly suggest that the current criteria for psychiatric diagnoses are insufficient to predict treatment response as consistently as we need.
Too often, these negative studies are dismissed automatically. In many cases, such studies have been poorly designed, and that is the main problem. But in other cases, I think we need to examine such negative studies very carefully, to understand why they were negative.

This is consistent with another type of scientific rigor (different from the RCT empirical approach): in mathematics, a single counterexample is sufficient to disprove a theorem. If such a counterexample is found, it can be extremely fruitful to examine why it occurred--in this way a new and more valuable theorem can be conceived. The process of generating the disproven theorem was not a waste of time, but could be understood as part of a process to find the accurate theorem. Such examples abound in other fields, such as computer programming--a program or algorithm may work quite well, but generate errors or break down in certain situations. Careful examination of why the errors are taking place is the only way to improve the program, and perhaps also to more deeply understand the problem the program was supposed to solve.
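As a small illustration of the counterexample idea, here is a classic case from number theory, sketched in Python (my own choice of example, not one from the paper under discussion): Euler's polynomial n^2 + n + 41 yields primes for every n from 0 to 39, and a naive "theorem" built on that evidence fails at n = 40 -- where the reason for the failure is itself instructive.

```python
def is_prime(k):
    """Trial-division primality test, adequate for small numbers."""
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

# Search for the first counterexample to "n*n + n + 41 is always prime."
first_failure = next(n for n in range(100) if not is_prime(n * n + n + 41))
print(first_failure)        # -> 40
print(40 * 40 + 40 + 41)    # -> 1681, which is 41 * 41
# Examining *why* it fails (n = 40 makes n*(n+1) + 41 equal to 41 * 41)
# teaches more than the bare fact of failure -- the same spirit in which a
# negative trial can refine a diagnostic category rather than being dismissed.
```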

Tuesday, June 16, 2009

"Micronutrient Treatment"

There are examples of "micronutrient treatments" being marketed to help various mental health problems.

These treatments may be marketed aggressively: there may be slick internet sites, perhaps with an enthusiastic following of people who believe strongly in the product.

If the manufacturer of such a product is quoting "research studies," I encourage you to look carefully at the studies referred to. If you are seriously considering products of this type, I would suggest looking at the articles in their entirety at a library.

I encourage anyone interested in pursuing treatments of this sort to ask the following questions:

1) What type of evidence exists regarding effectiveness & safety? Is the evidence from large, double-blinded, randomized, controlled studies conducted by researchers who do not have financial connections with the manufacturer?

2) Is the research pertaining to the product published in a journal with high scientific standards? (In order to answer this question for yourself, I would invite you to leaf through numerous issues of the journal, and compare this with an independent, peer-reviewed journal such as Lancet or The New England Journal of Medicine).

3) Is the evidence mainly from enthusiastic testimonial accounts or case studies? Is this type of evidence reliable enough for you?

4) How much money is required to purchase the treatment? Does the manufacturer encourage you to involve yourself in a long-term financial commitment?

5) After acquainting yourself with common sales and marketing tactics (for a primer on this subject, see Robert Cialdini's book, Influence: The Psychology of Persuasion), do you see evidence of highly persuasive or biased sales tactics being used in the marketing of the product? Are vulnerable people being taken advantage of in the marketing of the product?

Have a look at this link, which gives a brief history and overview of charlatanism--being familiar with this history may allow you to make more informed choices about your own medical care:
http://en.wikipedia.org/wiki/Quackery

I do not mean to single out alternative remedies in this post--I encourage the same critical standards to be applied regarding all types of therapy. Mainstream pharmaceutical manufacturers and other providers of mainstream therapies may often be guilty of devious marketing behaviours. In my opinion, though, mainstream pharmaceutical manufacturers have a much harder time getting away with overt charlatanism at this point, compared to many manufacturers of alternative remedies.

Also, I wholeheartedly acknowledge that there can be alternative remedies which are helpful, and which are marketed ethically.

Here in Canada, we live in a free society, with a strong emphasis on freedom of speech. Imposing more strict legal restrictions or regulations upon health choices would limit freedom. I support maintaining a free society, but the presence of charlatanism is one of the costs of this freedom.

Thursday, March 5, 2009

Active Placebo Studies show smaller benefits from Antidepressants

In most of the better clinical studies, a "placebo group" acts as a control. The placebo would consist of something totally inert, such as a capsule with nothing inside, or possibly with a small quantity of a sugar such as lactose.

The idea of an "active placebo" is interesting: in this case, the placebo is an agent shown not to have any beneficial or detrimental effect on the disease in question, but which clearly has side-effects.

An example would be using a tablet of Gravol (dimenhydrinate) as the "placebo." It is not an antidepressant, but it has side-effects (sedation, dry mouth, etc.). This makes it a more convincing placebo, since a person taking an agent which produces side-effects is more likely to believe they are taking the "active" agent. Conversely, if a person taking a placebo strongly believes it to be a placebo (since it produces no side-effects), they are less likely to have any "placebo effect" response, and the study becomes partially "unblinded," defeating the point of the placebo control.

There is a body of research literature looking at using "active placebo" vs. antidepressants to treat depression.

http://www.ncbi.nlm.nih.gov/pubmed/9614471

{a 1998 meta-analysis from the British Journal of Psychiatry showing that the effect sizes of antidepressant therapy are only about half as large when compared against an active placebo, rather than an inert placebo}

http://www.ncbi.nlm.nih.gov/pubmed/14974002

{a 2004 Cochrane review with similar findings}

These results support the evidence that antidepressants work -- but they suggest that most studies probably overestimate how well antidepressants work, because in most cases the comparison is against an inert placebo.

I think that more clinical studies need to include active placebos.

I post this not to be cynical, or to discourage the use of antidepressants--as you can see from the rest of this blog, I strongly support medication trials to treat psychiatric problems--but I believe that we have to always search for the most accurate, least biased sources of information. We need to be wary of exaggerated claims about the effectiveness of anything, especially since I see in my practice that many of the treatments don't seem to work quite as well as the ads claim they should.

Sunday, November 9, 2008

Biases associated with Industry-funded research

There is evidence that research studies sponsored by pharmaceutical companies produce biased results. Here is a collection of papers supporting this claim:

http://ajp.psychiatryonline.org/cgi/content/full/162/10/1957
This paper from the American Journal of Psychiatry reports that industry-sponsored studies are 4.9 times more likely to show a benefit for their product.

http://www.ncbi.nlm.nih.gov/pubmed/15588746

This paper shows an association between industry involvement in a study and the study reporting a larger benefit for the industry's product (in this case, newer antipsychotics).

http://bjp.rcpsych.org/cgi/content/full/191/1/82
In this study, the findings suggest that the direct involvement of a drug company employee in the authorship of a study leads to a higher likelihood of the study reporting a favourable outcome for the drug company product.

http://jama.ama-assn.org/cgi/content/full/290/7/921
This is a very important JAMA article, showing that industry-funded studies are more likely to recommend the experimental treatment (i.e. favouring their product) than non-industry studies, even when the data are the same.

I do not publish this post to be "anti-drug company." I think the pharmaceutical industry is wonderful. The wealth of many of these companies allows them to do very difficult, high-tech research with the help of some of the world's best scientists. The industry has produced many drugs that have vastly improved people's lives, and that have saved many lives.

Even the profit-driven nature of companies can be understandable and healthy...it creates economic pressure to produce treatments that are actually effective, and superior to the products of competitors.

Sometimes the research trials necessary to show the benefit of newer treatments require such a large scale that they are very expensive...sometimes only a large drug company actually has enough money to sponsor trials of this type.

BUT...the profit-driven orientation of companies may cause them to take short-cuts to maximize profits...
-marketing efforts can distort the facts about effectiveness of a new treatment
-and involvement in comparative trials by an eager, profit-driven industry very likely biases results, and biases the clinical behaviour of doctors

One partial solution to these problems is to require frank transparency about industry involvement whenever research papers are published.

Another solution is to have more government funding for independent, unbiased large-scale clinical trials.

And another solution is for all of us to be better informed about this issue!

Tuesday, October 28, 2008

Statistics

Most research findings include a lot of statistical analysis of data, and many of the conclusions or assertions made in research papers are based on the statistical analysis.

This is a major advance in the science of analyzing and interpreting data.

Yet, there are a few complaints I have about the way statistical analyses are reported:

The application of statistics is meant to give the reader a very clear, objective summary of what data show, or what data mean. The spirit is neutral objectivity, without the biases of arbitrary subjective opinion or judgment, of people "eyeballing" the data and concluding there is something meaningful there, when in fact there is not.

Yet, in most statistical summaries of research data, the words "significant" and "not significant" are frequently used. The criterion for "significance," however, is arbitrarily determined. It is part of research and statistical culture to consider a difference "significant" if the probability of its arising from random chance alone is 5% or less. If the data show a difference that could be due to randomness with a probability of 6%, the difference is reported as "non-significant."
This is an intrusion of human-generated arbitrariness into what is supposed to be an objective, clear analysis of data.

What I feel is a much more accurate way to report on a statistical analysis in a research paper is the following:

the probability ("P value") of a difference being due to chance, rather than to a real difference, should always be given prominently in the paper, and in the abstract, rather than the words "significant" or "non-significant". The reader can then decide whether the finding is significant or not.

As far as I'm concerned, any P value less than 0.5 (50%) carries some degree of significance to it, and the reader of a paper or abstract deserves to see this value prominently given. And it seems absurd to me that results showing a P value of 0.06 would be deemed "non-significant" while results with a P value of 0.05 would be "significant".

**note: there are more rigorous and precise definitions for the statistical terms above; I use somewhat simplified definitions to make my general point clearer and more accessible. I encourage the interested reader to research the exact definitions.
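A minimal sketch of this reporting style, in Python using scipy (the data are invented for illustration): the exact p-value is printed, letting the reader see how close it sits to the arbitrary 0.05 line, rather than receiving only a binary verdict.

```python
from scipy import stats

# Invented symptom-improvement scores for two small groups:
drug    = [12, 9, 11, 14, 8, 10, 13, 9, 11, 12]
placebo = [10, 8, 9, 11, 7, 9, 10, 8, 9, 10]

t, p = stats.ttest_ind(drug, placebo)
print(f"t = {t:.2f}, exact p = {p:.3f}")  # report this number prominently

# The conventional summary collapses the same information into a verdict
# that flips entirely between p = 0.049 and p = 0.051:
verdict = "significant" if p < 0.05 else "non-significant"
print(f"conventional verdict at alpha = 0.05: {verdict}")
```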

Another thought I've had is that, when it comes to clinical decision-making, "eyeballing" the data -- provided the data are fairly represented (for example, on a clear graph whose axes include the origin (0,0)) -- can often lead to more intuitively accurate interpretations than a numerical statistical summary. There is more information represented visually in a graph than in a single number which summarizes the graph, in the same way that there is more information in a photograph than in a number which summarizes some quality of the photograph.

The biggest advantage of sophisticated statistical summaries lies in optimizing limited research resources, such that we can re-direct our attention away from treatments that work less well and focus instead on treatments that work better, particularly when a given treatment could determine survival. Also, if there is abundant data, but little way of understanding the data well, then a good statistical analysis can guide treatment decisions. It may help to choose the best chemotherapy drug for cancer, or the best regimen to manage a heart attack. For depression, though, and perhaps other mental illnesses, the statistical analyses can often add more "fuzziness" and distortion to clinical judgment, unless the reader has a sharp eye for the many sources of bias.

Monday, July 7, 2008

Links to Research

Here are some sites I recommend when researching medical evidence:

1) the U.S. National Institute of Mental Health; their research is funded by the U.S. government:

http://www.nimh.nih.gov/

2) PubMed: this is a medical research database, with access to abstracts, sometimes to the full texts, of research papers. I invite you to go look at the research yourself, directly. I do think it is important to develop a critical eye, though, for the signs of strong vs. weak research evidence (e.g. size of study, randomization, length of follow-up, source of funding, etc.). If you have read a newspaper headline about a research finding, I think it is usually important to go to the primary source, and have a look at the findings yourself. Sometimes the media presentation of the research findings is misleading or incomplete.

http://www.ncbi.nlm.nih.gov/PubMed/