Monday, October 5, 2009

Mediterranean diet is good for your brain

In this month's Archives of General Psychiatry, Sanchez-Villegas et al. publish a study showing a strong association between consuming a Mediterranean diet (lots of vegetables, fruits, nuts, whole grains, and fish; low intake of meat; moderate intake of alcohol and dairy; and a high ratio of monounsaturated to saturated fatty acids) and lower rates of depression. The data were gathered prospectively over a period averaging more than 4 years, following about 10 000 initially healthy students in Spain who reported their food intake on questionnaires.

I'll have to look closely at the full text of the article. I'm interested to consider the question of whether the results strongly suggest causation, or whether the results could be due to non-causal association. That is, perhaps people in Spain with a higher tendency to become depressed tend to choose non-Mediterranean diets. Another issue is cultural: the study was done in Spain, where a Mediterranean diet may be associated with certain--perhaps more traditional--cultural or subcultural features, and this cultural factor may then mediate the association with depressive risk.
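As a toy illustration of this confounding worry (entirely my own invented example--the variable names, effect sizes, and simulated data have nothing to do with the actual study), here is a small Python sketch in which a hypothetical "traditional culture" factor drives both diet choice and depression risk; the crude diet-depression association then shrinks toward nothing once the cultural factor is adjusted for:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 10_000
traditional = rng.binomial(1, 0.5, n)                    # unmeasured cultural factor
med_diet = rng.binomial(1, 0.3 + 0.4 * traditional)      # diet choice depends on culture
p_depression = 1 / (1 + np.exp(-(-2.0 - 0.8 * traditional)))  # risk depends on culture only
depressed = rng.binomial(1, p_depression)
df = pd.DataFrame({"med_diet": med_diet, "traditional": traditional, "depressed": depressed})

crude = smf.logit("depressed ~ med_diet", data=df).fit(disp=False)
adjusted = smf.logit("depressed ~ med_diet + traditional", data=df).fit(disp=False)
print("crude odds ratio:   ", np.exp(crude.params["med_diet"]))     # looks protective (~0.7)
print("adjusted odds ratio:", np.exp(adjusted.params["med_diet"]))  # roughly 1 after adjustment

The point is only that a prospective association can look protective even if the diet itself does nothing; how carefully the authors adjusted for plausible cultural and lifestyle factors is one of the things to check in the full text.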

In any case, in the meantime, given the preponderance of other data showing health benefits from a Mediterranean-style diet, I wholeheartedly (!) recommend consuming more nuts, vegetables, olive oil, fish, whole grains, and fruit; and less red meat.

The need for CME

Here's another article from "the last psychiatrist" on CME:
http://thelastpsychiatrist.com/2009/07/who_should_pay_for_continuing.html#more

Another insightful article, but pretty cynical!

But here are some of my opinions on this one:

1) I think that, without formalized CME documentation requirements, there would be some doctors who would fall farther and farther behind in understanding current trends of practice, current research evidence, etc.
2) In the education of intelligent individuals, I have long felt that process is much more important than content. A particular article with an accompanying quiz is bound to convey a certain biased perspective. It is my hope that most professionals are capable of recognizing and resisting such biases; in this modern age, I do think most of us have a greater awareness of bias, of being "sold" something. In any case, the process of working through such an article provides a structure for contemplating a particular subject, and perhaps for raising certain questions or an internal debate about it, to reflect upon or research further later on. Yet I agree that there are many psychiatrists who might be more easily swayed, in a non-critical manner, by a biased presentation of information. The subsequent quiz, and the individual's high marks on it, then become reinforcers for learning biased information.
3) After accurately critiquing a problem, we should then move on and try to work together to make more imaginative, creative educational programs which are stimulating, enjoyable, fair, and as free of bias as possible.

I think this concludes my little journey through this other blog. While interesting, I find it excessively cynical. It reminds me of someone in the back seat of my car continuously telling me--accurately, and perhaps even with some insightful humour--all the things I'm doing wrong. Maybe I need to hear this kind of feedback periodically--but small doses are preferable! Actually, I find my own writing at this moment becoming more cynical than I want it to be.

Opinions on mistakes psychiatrists make

Here's another interesting link from "the last psychiatrist" blog:

http://thelastpsychiatrist.com/2006/11/post_2.html#more


I agree with many of his points.

But here are a few counterpoints, in order:

1. I think some psychiatrists talk too little. There's a difference between nervous or inappropriate chatter that dilutes or interrupts a patient's opportunity to speak, and an engaged dialog focusing on the process or content of a problem. There is a trend in psychiatric practice, founded in or emphasized by psychoanalysis, that the therapist is to be nearly silent. Sometimes I think these silences are unhelpful, unnecessary, inefficient, even harmful. There are some patients I can think of for whom silence in a social context is extremely uncomfortable, and certainly not an opportunity to learn in therapy. Therapy in some settings can be an exercise in meaningful dialog, active social skills practice, or simply a chance to converse or laugh spontaneously.

I probably speak too much, myself--and I need to keep my mouth shut a little more often. I have to keep an eye on this one.

It is probably better for most psychiatrists to err on the side of speaking too little, I would agree. An inappropriately overtalkative therapist is probably worse than an inappropriately undertalkative one. But I think many of us have been taught to be so silent that we cannot be fully present, intuitively, personally, intellectually, to help someone optimally. In these cases, sometimes the tradition of therapeutic silence can suppress healthy spontaneity, positivity, and humour in a way which only delays or obstructs a patient's therapy experience.

2. I agree strongly with this one--especially when history details are ruminated over interminably during the first few sessions.
However, I do think that a framework for being comprehensive is important. And sometimes it is valuable, in my opinion, to review the whole history again after seeing a patient for a year, or for many years. There is so much focus on comprehensive history-taking during the first few sessions, or the first hour, that we forget to revisit or deepen this understanding later on, once we know the patient much better. Whole elements of a patient's history can be forgotten because they were only talked about once, during the first session.

There is a professional standard of doing a "comprehensive psychiatric history" in a single interview of no longer than 55 minutes. There may even be a certain bravado among residents, or an admiration for someone who can "get the most information" in that single hour. I object to this being a dogmatic standard. A psychiatric history, as a personal story, may take years to understand well, and even then the story is never complete. It can be quite arrogant to assume that a single brief interview (which, if optimal exchange of "facts" is to take place, can sound like an interrogation) can lead to a comprehensive understanding of a patient.

I do believe, though, that certain elements of comprehensiveness should be aimed for, and aimed for early. For example, it is very important to ask about someone's medical ailments, about substance use, about various symptoms the person may be too embarrassed to mention unless asked directly, etc. Otherwise an underlying problem could be entirely missed, and the ensuing therapy could be very ineffective or even deleterious.

Also, some patients may find it beneficial, or a relief, to go through a very comprehensive historical review in the first few sessions, with the structure of the dialog supplied mainly by the therapist. Others may feel more comfortable, or find it more beneficial, to supply the structure of their story themselves. So maybe it's important not to make strong imperative statements on this question: as with so many other things in psychiatry, a lot depends on the individual situation.

3. I think it's important not to ignore ANY habitual behavior that could be harmful. Yet perhaps some times are better than others to address or push for things like smoking or soft-drink cessation: a person with a chronically unstable mood disorder may require improved mood stability (some of which may actually come from cigarette smoking, in a short-term sense anyway), before they are able to embark on a quit-smoking plan.

4. Not much to add here.
5. Well, point taken. I've written a post about psychiatry and politics before, and suggested a kind of detached, "monastic role." But on the other hand, any person or group may have a certain influence--the article here suggests basically that it's none of psychiatry's business to deal with political or social policy. Maybe not. But the fact is, psychiatry does have some influence to effect social change. And, in my opinion, it is obvious that social and political dynamics are driven by forces that are similar to the dynamics which operate in a single family, or in an individual's mind. So, if there is any wisdom in psychiatry, it could certainly be applicable to the political arena. Unfortunately, it appears to me that psychiatrists I have seen getting involved in politics or other group dynamics are just as swept up in dysfunctional conflict, etc. as anyone else.
But if there's something that psychiatry can do to help with war or world hunger, etc. -- why not? In some historical situations, an unlikely organized group has come to the great aid of a marginalized or persecuted group in need of relief or justice, even though the organized group didn't necessarily have any specialized knowledge of the matter it was dealing with.

6. I strongly agree. I prefer to offer therapy to most people I see, and I think most people do not have adequate opportunities to experience therapy. Yet I also observe that many individuals could be treated with a medication prescribed by a GP and simply experience resolution of their symptoms. The subsequent "therapy" is done by the individual in their daily life, and does not require a "therapist." In these cases, the medication may no longer be needed after a year or so. Sometimes therapists may end up offering something that isn't really needed, or may aggrandize the role or importance of "therapy" (we studied all those years to learn to be therapists, after all--so a therapist's view on the matter may be quite biased), when occasionally the best therapy of all could simply be self-provided. Yet, of course, many situations are not so simple at all, and that's where a therapy experience can be very, very important. I support respecting the patient's individual wishes on this matter, after providing the best possible presentation of the benefits and risks of the different options. Of course, we're all biased in how we understand this benefit/risk profile.
7. Some interesting points here... but subject to debate. Addressing these complex subjects in an imperative manner makes me uncomfortable.
8. Polypharmacy should certainly not be the norm, though intelligent use of combination therapies, together with a clear understanding of side-effect risks, can sometimes be helpful. Some of the statements in this section have not actually been studied well--for example, the claim that it makes no pharmacological sense to combine two different SSRI antidepressants at the same time. There is no body of research data PROVING that such a combination is in fact ineffectual. Therefore, before we scoff at the practitioner who prescribes two SSRIs at once, I think we should look at the empirical result: since there are no prospective randomized studies, the best we can do is see whether the individual patient is feeling better, or not.
9. I'm not a big fan of "diagnosis," but sometimes, and for some individuals, being able to give a set of problems a name can be part of a very helpful therapy experience. The name, the category, may lead the person to understand more about causes and solutions. Narrative therapy, I think, makes good use of "naming" (a variant of "diagnosing") as a very useful therapeutic construct.

10. There isn't a number 10 here, but the comments at the end of this article were good.

Biased presentation of statistical data: LOCF vs. MMRM

This is a brief posting about biostatistics.

In clinical trials, some subjects drop out.

The quality of a study is best if there are few drop-outs, and if data continues to be collected on those who have dropped out.

LOCF (last observation carried forward) and MMRM (mixed-effects model for repeated measures) are two different statistical approaches to analyzing study populations in which some of the subjects have dropped out.

One technique or the other may generate different conclusions, different numbers to present.
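To make the contrast concrete, here is a small, hypothetical sketch (my own toy data and code, not taken from any of the articles below) comparing an LOCF endpoint analysis with a simple mixed-effects ("MMRM-like") analysis in Python, using pandas and statsmodels. A real MMRM would typically treat time as categorical with an unstructured covariance across visits; this random-intercept model is only a rough stand-in:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Toy longitudinal trial: 40 subjects, 2 arms, symptom scores at weeks 0-6,
# with some subjects dropping out partway through.
rows = []
for subj in range(40):
    arm = "drug" if subj < 20 else "placebo"
    baseline = 30 + rng.normal(0, 3)
    slope = -2.0 if arm == "drug" else -1.0
    dropout_week = rng.choice([4, 6, 8])          # 8 means "completed the study"
    for week in (0, 2, 4, 6):
        score = baseline + slope * week + rng.normal(0, 2)
        if week >= dropout_week:
            score = np.nan                        # missing after dropout
        rows.append({"subject": subj, "arm": arm, "week": week, "score": score})
df = pd.DataFrame(rows).sort_values(["subject", "week"])

# LOCF: carry each subject's last observed score forward to later visits,
# then compare the week-6 ("endpoint") means.
df["score_locf"] = df.groupby("subject")["score"].ffill()
print(df[df["week"] == 6].groupby("arm")["score_locf"].mean())

# MMRM-like: model the observed data only, with a random intercept per subject;
# the arm and week-by-arm coefficients estimate the treatment effect over the visits.
observed = df.dropna(subset=["score"])
model = smf.mixedlm("score ~ C(week) * arm", data=observed, groups=observed["subject"])
print(model.fit().summary())

Depending on the dropout pattern, the two approaches can give noticeably different estimates of the treatment effect at the final visit--which is exactly the kind of discrepancy at issue below.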

The following article illustrates how these techniques can skew the presentation of data, and therefore change our conclusions about an issue, despite nothing "dishonest" taking place:

http://thelastpsychiatrist.com/2009/06/its_not_a_lie_if_its_true.html#more

While I agree with the general point of the above article, I find that the specific example it refers to is not necessarily an instance of bias: in researching the subject myself, I find that LOCF is not necessarily superior to MMRM, even though LOCF has been the most commonly used method for dealing statistically with drop-outs. The following references make a case that MMRM is less biased than LOCF most of the time (though it should be noted that whenever drop-outs are lost to follow-up, the absence of data on these subjects weakens the study results--an issue worth considering closely when reading a paper):
http://www.stat.tamu.edu/~carroll/talks/locfmmrm_jsm_2004_rjc.pdf
http://www3.interscience.wiley.com/journal/114177424/abstract?CRETRY=1&SRETRY=0

In conclusion, I can only encourage readers of studies to be more informed about statistics. And, if you are looking at a study which could change your treatment of an illness, then it is important to read the whole study, in detail, if possible (not just the abstract).

Which is better, a simple drug or a complex drug?

Here is another critique of medication marketing trends in psychiatry:

http://thelastpsychiatrist.com/2009/04/how_dangerous_is_academic_psyc_1.html#more

I agree quite strongly that there has been collusion between:
- psychiatrists who eagerly yearn to meaningfully apply their knowledge of psychopharmacology, pharmacokinetics, neurotransmitter receptor binding profiles, etc. (to justify all those years of study)
- and pharmaceutical company sales reps

I can recall attending many academic rounds presentations in which a new drug would be discussed, for example a newly released SSRI. During the talk, there would be boasting about how the new drug had the highest "receptor specificity," or the lowest activity at receptors other than those for serotonin (e.g. for histamine or acetylcholine).

These facts, presented to me while I enjoyed my corporate-sponsored lunch, were true. But they were used as sales tactics, bypassing clear scientific thought. Just because something is more "receptor-specific" doesn't mean that it works better! It may in some cases be related to a difference in side effects. Yet sometimes those very side effects may be related to the efficacy of the drug.

By way of counter-example, I would cite the most effective of all antipsychotic medications, clozapine. This drug has very little "receptor specificity": it interacts with all sorts of different receptors. And it has loads of side effects too. Perhaps this is part of the reason it works so well. Unfortunately, this does not sit well with those of us who yearn to explain psychiatric medication effects using simple flow charts.

Similarly, the pharmacokinetic differences between medications are often used as instruments of persuasion--yet oftentimes they are clinically irrelevant, of unproven clinical relevance, or even clinically inferior (e.g. newer SSRI antidepressants have short half-lives, which can be advantageous in some regards; but plain old Prozac, with its very long half-life, can be an excellent choice, because individuals taking it can safely skip a dose without a big change in the serum level, or the side effects that would ensue from such a change).
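As a rough back-of-envelope illustration of the half-life point (my own numbers--fluoxetine's effective half-life, particularly with its active metabolite, is longer and more complicated than this, and steady-state accumulation is ignored):

def fraction_remaining(hours_since_dose, half_life_hours):
    # Simple first-order elimination: the fraction left after time t is 0.5 ** (t / t_half)
    return 0.5 ** (hours_since_dose / half_life_hours)

# One extra missed-dose day (24 h) elapses:
print(fraction_remaining(24, 5 * 24))  # long half-life (~5 days, Prozac-like): ~0.87 remains
print(fraction_remaining(24, 24))      # short half-life (~1 day): ~0.50 remains

With a long half-life, a skipped dose barely moves the serum level; with a one-day half-life, roughly half of the drug is gone, which is the kind of drop that can bring on the side effects mentioned above.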

I should not be too cynical here -- it is important to know the scientific facts that can be known about something. Receptor binding profiles and half-lives, etc. are important. And it can be useful to find medications that have fewer side-effects, because of fewer extraneous receptor effects. The problem is when we use facts spuriously, or allow them to persuade us as part of someone's sales tactic.

So, coming back to the question in the title, I would say it is not necessarily relevant whether a drug works in a simple or complex way. It is relevant whether it works empirically, irrespective of the complexity of its pharmacologic effects.