Biases in Research
We are much more familiar these days with biases in pharmaceutical research studies. A clinical study of a medication is more likely to show an exaggerated beneficial effect if the study is sponsored by the manufacturer. This doesn't mean industry-sponsored research is "bad," and it doesn't mean that pharmaceutical products are "bad," but it does mean that we have to look with a careful, skeptical eye at research results--not just at impressive tables or graphs, but also at the sources of funding for the study and the authors' past relationships with the manufacturers. There could indeed be overt "badness" if there are examples of flagrant profiteering by the people involved. But the more salient issue, in my opinion, is simply the need to question the authority of results from such studies.
This same critical eye is very much needed when looking at research evidence regarding alternative treatments. Very strong sales tactics are used to market supplements, herbal remedies, and other treatments, and the standards of evidence presented are often much lower than those from pharmaceutical studies. For example, simple testimonial accounts are much more common in alternative medicine marketing, as are impressive-sounding but clinically irrelevant scientific or pseudo-scientific claims.
We may assume that studies of psychotherapy would be relatively free of these biases. After all, there is no big company that is profiting from psychotherapy!
But we must maintain a critical eye even for studies of psychotherapy. Here are some reasons:
1) A positive study of a psychotherapy technique may not bring obvious financial profit to anyone, but it is likely to increase the prestige of the authors. A big part of the "currency" in a Ph.D. researcher's career relates to impressive publications. A study showing a significant treatment effect of a psychotherapy technique is likely to add to the fame and career advancement of the authors. This career advancement is analogous to direct financial gain.
2) Many psychotherapy researchers have devoted years of study to their therapy technique. Imagine if you had spent 10 years studying a particular thing, and that you had strong feelings about it. You could imagine that you might have a bias in favour of the technique that you had studied all those years. You would really want to show that it works! If a study showed that it didn't work so well, it might lead you to question the value of all those years of your career! In Cialdini's terms, this bias would have to do with "consistency." If someone has been consistently committed to a particular thing for a long time, they are biased toward continuing to support it, beyond what would otherwise be reasonable. Furthermore, if you had worked all those years studying one particular technique, your social and professional community of peers would be more likely to share similar opinions. You might have frequently attended conferences devoted to your area of specialty. You might even have taught the technique to students who appreciated your help and mentorship. This would lead to Cialdini's "social pressure" effect -- since the people around you support your idea, you will be more likely to hold onto the idea yourself, beyond what would otherwise be reasonable.
3) There is more and more direct financial gain related to therapy techniques. We see a lot of books, self-help guides, paid seminars and workshops, etc. Charismatic marketing, including through publishing of research studies, is likely to increase the financial profit of those involved.
4) In the psychotherapy research community, CBT is the most common modality. CBT is intrinsically easier to research, since it is more easily standardized, the techniques themselves involve a lot of measurement, and the style tends to be more precisely time-limited. CBT is more "scientific" and therefore attracts researchers whose background is more strongly analytical and scientific. There is nothing intrinsically wrong with this, but it leads to more bias in the research. Therapy styles other than CBT are studied less frequently, so there are fewer positive studies of other styles. This gives the impression that CBT is best, even though comparative studies have not actually shown that it is. New versions or variations of CBT (with different fancy-sounding names) are also frequently marketed, and often show good results in research, but once again this does not really prove that the techniques are best. The research study becomes an advertising tool for those who have designed the technique.
I do not mean to sound too cynical here...I think that CBT, like all other therapy techniques, is interesting, important, and helpful. We should all learn about these techniques and make use of some of their principles. But I do not think that any one style is necessarily "best." We should not allow biases in research, including simple marketing effects, to unduly sway our judgment about how best to help people.
I feel that the more important foundation in trying to help people is spending time getting to know them, and hearing from the person you are with (whether a client, a patient, a family member, or a friend) what type of help they would actually like.
Also, different individual therapists have different personalities, interests, experiences, weaknesses, and skills. I think it is unhealthy for a community of therapists or healers to be pushed into offering a very narrow range of techniques or therapeutic strategies. Instead, I think that the individual talents and strengths of each therapist should be honoured, and there should be room in any health care system to allow for this.