Thursday, August 18, 2022

How Minds Change by David McRaney: a book review and discussion

David McRaney, in his new book "How Minds Change" (2022), reviews our understanding of why people form tenacious beliefs that are resistant to change, fueling political polarization, conspiracy theories, hate groups, cults, anti-vaccine movements, climate change denialism, etc.  

I have discussed a lot of this material in some of my previous posts.  A big focus in McRaney's book is on which strategies are most effective to help with these problems.  He shows that simply presenting facts to a person with entrenched beliefs is usually ineffective, and can even cause the person to become more entrenched.  Instead, he discusses several techniques which have much better success.  These techniques are to some degree common-sensical, and are foundations of what might be found in any compassionate interaction, or any psychotherapy scenario.  

He discusses several such strategies, including deep canvassing, the elaboration likelihood model (ELM), street epistemology, and motivational interviewing.  All of these are similar--I'll summarize the core features here: 

1) Establish rapport.  Empathize.  The communicator must seem trustworthy, credible, respectful, and reliable.  Obtain consent to talk about the issues at hand.  

2) Ask how strongly the person feels about a particular issue; repeat back and clarify; identify a confidence level, such as from 0 to 10; ask how they chose that number; ask how they've judged the quality of their reasons for their choice; summarize, and check that you've summarized accurately.  

3) If there are core values influencing the person's opinion, such as the importance of family, community, safety for children, freedom, or loyalty, be sure to empathize with, acknowledge, and affirm these.  If there are core values in common, be sure to emphasize the commonality.  

4) If their confidence level was not at an extreme (0 or 10), ask why not.  

5) Ask if there was a time in their life before they felt this way about the issue, and if so, what led to the change.  

6) Share a story about someone affected by the issue.  

7) Share a personal story about why and how you reached your own position, but do not argue.  

8) Ask for their rating again, then wrap up and wish the person well, possibly with an invitation to talk again.  

Notably, these techniques do not involve arguing about facts, such as scientific data.  A person holding strongly entrenched beliefs may consider contrary facts or data to be false, biased, or irrelevant.  They may feel that they would be betraying their ingroup or their sacred values if they were to change their position.  Yet elsewhere in the book there is an emphasis on facts as well; the catch is that there needs to be a tipping point of information frequency within the person's ingroup, beyond which the group opinion starts to change suddenly.  Below that level, facts are easily dismissed or ignored, or even, ironically, used to consolidate previous beliefs, with the fact-provider labeled a misguided or even evil outsider.  

In some of McRaney's examples, he shows, as I have discussed before, that strong ingroups can be the main factor causing resistance to rational changes in belief, even when the ingroup's beliefs are causing great harm to the members themselves and are contrary to their core values (the anti-vax movement is an example).  He points out that people sometimes need to leave these ingroups for other reasons before they become amenable to changing their beliefs; exiting the ingroup sometimes needs to happen first, but this can be unlikely to happen.  To make it easier for ingroup members to leave, there would need to be a kind, respectful, compassionate approach.  If we only show anger and hostility to these ingroups, the members are more likely to rally together, as if protecting themselves from an enemy attack.  

McRaney notes that many practitioners of techniques such as deep canvassing have recorded video examples of their conversations, to help others learn and to allow constructive feedback about the technique.  These would be worth checking out online, to see people working in this area lead a successful conversation toward positive change.  Otherwise, as with so many other techniques in health care or in life, we are stuck with just reading about an idea, rather than practicing and learning "hands on" with the guidance and feedback of others.  

The one critique I have of McRaney's book is that he leaves out discussion of many research leaders in the psychology of conspiracy theories, cults, and persuasion.  Cialdini's work from decades ago is never mentioned.  Psychologists such as Sander van der Linden are not mentioned either.  These other researchers suggest additional techniques, including a "fake news inoculation" technique, in which you can learn and practice ways to protect yourself from misleading information.  See the website https://www.getbadnews.com/books/english/

Also, the book does not discuss individual variation as a factor affecting tenacity of belief, propensity to conspiracy beliefs, resistance to fact-based arguments, etc.  In my previous post (https://garthkroeker.blogspot.com/2021/09/conspiracy-theories-vaccine-hesitancy.html?m=1) I discuss factors such as past trauma and personality disorders, which could cause an individual to hold more rigid harmful or false beliefs.  There would need to be some variability in the approach to conversing with someone about these issues, given these individual variations.  It may be valuable to focus persuasive efforts on those within a strong ingroup who are most ambivalent or amenable to change.


 

Sunday, July 31, 2022

Medical School Admission Criteria: a discussion

It is very difficult to get admitted to a medical school.  At UBC, only about 10% of applicants are accepted.  

This leads to extreme competition.  Students admitted to the program have average university grades just under 90%, and average standardized test scores (from the MCAT) just under the 90th percentile.  

Even an average university grade above 90% is no guarantee of admission.  Only 26% of applicants with such high grades are accepted.  
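A quick conditional-probability check shows how skewed these figures make the admitted class.  The 10% overall acceptance rate and the 26% acceptance rate among top-grade applicants are from the figures above; the assumption that 20% of applicants have a 90%+ average is purely hypothetical, for illustration:

```python
# Bayes-style check on the admission figures.
# From the post: P(admit) ~ 0.10, P(admit | grades >= 90%) ~ 0.26.
# Hypothetical assumption: 20% of applicants have a 90%+ average.

p_admit = 0.10
p_admit_given_high = 0.26
p_high = 0.20                     # hypothetical, not from UBC data

# Bayes' rule: P(grades >= 90% | admitted)
p_high_given_admit = p_admit_given_high * p_high / p_admit
print(round(p_high_given_admit, 2))   # prints 0.52

# Implied admit rate for everyone else in the pool
p_admit_given_rest = (p_admit - p_admit_given_high * p_high) / (1 - p_high)
print(round(p_admit_given_rest, 2))   # prints 0.06
```

Under these assumptions, over half the admitted class would have 90%+ averages, yet 74% of those top-grade applicants are still rejected, while everyone else faces roughly a 6% acceptance rate.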

Other factors must therefore strongly influence the likelihood of admission, aside from grades and standardized test performance.  These are the so-called "extracurriculars," such as a history of volunteering and "leadership activities," reference letters, and performance in a "multiple mini-interview," which involves responding in a desired fashion, within a time limit, to various hypothetical scenarios posed by a sequence of 10 different interviewers.  

I understand the need to have multiple criteria to judge applicants.  But I would like to make the case here that the current selection process is not particularly efficient or fair: it has a very strong bias against people with particular personality types, despite those people being very well-suited to become excellent physicians, and it leads to years of unhealthy, expensive, and wasteful frenzied competition before medical school even starts.  There is also a bias in favour of people from wealthier families, since such people can more easily afford years of volunteering, cultural exploration, club leadership, MCAT prep courses, tutoring, etc., instead of having to work long hours for years near minimum wage to support educational expenses.  

Imagine who the best future surgeons would be.  They would likely have excellent hand-eye coordination, tactile skills, mastery of fine details such as anatomy, immense patience with meticulous tasks, and the ability to remain calm and focused for long periods of time.  Some of them might not be "neurotypical."  Many of the most talented such people would not necessarily have great social skills, would not be inclined to volunteer at Big Brothers or at nursing homes, would not be on the executive of university clubs, would not have a history of musical or drama performances, would not seem impressive in rapid interviews, and may not even have high grades in biochemistry or English.  I don't believe that any of the relevant surgical talents described above are assessed at all in the medical school admissions process.  

The current medical school admissions process therefore excludes many of the best future surgeons.  Many other future surgeons who are accepted will have spent years padding their CVs with activities they were not really interested in, just to keep up with the pre-med competition game.  This is a waste not only for these individuals, but for society as a whole: we end up with budding surgeons who have several fewer years of professional life, because those years were spent on CV-padding activities.  

The competition to show extracurricular volunteering and "leadership activities" also creates a bias against introverts.  Many of us are quiet and shy, with relatively solitary habits.  Such gentle, quiet people often would make excellent physicians: people who are calm, good listeners, patient, kind, intelligent, sensitive, and skilled.  But for a person with this personality style, group involvement, group leadership, and many types of volunteering are simply unpleasant, or even impossible.  I am an example of such a quiet, shy, relatively solitary person.  

We should have a selection process that chooses people who are likely to be competent, skilled, and stable.  We should have a process that makes it hard for a psychopathic person to get admitted.  The existing process does select for competence, stability, and skill indirectly through grades, even though most of the actual course content has little to do with skills that would be of clinical use during a medical career.  A psychopathic person is less likely to have consistently high grades and a good volunteer record.  But many psychopaths could present themselves very well in cross-sectional interviews, while many non-psychopaths who are simply shy or reserved would bomb the interviews.  

What changes to the process would be reasonable?  I don't think there's an easy answer.  I think that grades and MCAT scores should continue to have a prominent impact on admissions, despite some of the biases involved; maybe this is unavoidable.  It would seem very reasonable and practical for applicants to be rewarded in the admissions hierarchy if they have proven experience in health care work or in relevant skills: for example, people who have worked in nursing or other allied health fields, as a paramedic, in an anatomy lab, or as a technician, or who have done other work requiring long hours of meticulous focus, veterinary work, or psychotherapeutic work.  I think that much less weight should go to performance in a cross-sectional interview process, since this is extremely prone to biases which are not relevant to future medical performance (this reminds me of Kahneman's descriptions of biased and meaningless selection interviews in "Thinking, Fast and Slow").  I think that showing "leadership skills" should have minimal impact on admissions, and people should not be penalized in the process for not showing them.  Furthermore, those who are most ambitious to show such "leadership" are often the worst leaders.  

Wednesday, May 25, 2022

"The Biology of Desire: Why Addiction Is Not a Disease" by Marc Lewis

Marc Lewis explores the neurobiology of addiction in this short book, with proposed approaches to better understanding and helping people who are struggling with addictions.  

He comes across very clearly as a compassionate person, with a good understanding of this area and personal experience in it.  He is probably someone who would be good to have as a therapist or support in the context of addictive problems.  

The book presents several case stories, which is always a compelling way to describe health care issues.  They could be a source of inspiration to help people in their own journeys through addiction.  But of course testimonial accounts have only limited value as scientific evidence, since they can introduce very strong biases in the reader if not accompanied by references to large controlled studies.  

He has good reasons for disparaging what he calls the "medicalization" of addiction, and for emphasizing his opinion that addiction should not be considered a "disease."  Many of these reasons involve what most of us would consider "bad medicine": institutional or even punitive treatment, simple remedies such as drug treatments given without addressing social or psychological issues, etc.  He particularly disparages psychiatrists, as though he thinks all psychiatrists enjoy the narrow or excessive brandishing of labels and dispensing of medications, without attending to deep understanding, therapeutic compassion, and a biopsychosocial focus with their patients.  

So I found this part of his message to be tiresome.  Excessive, narrow "medicalization" of almost any issue is not good medicine.  Almost any health condition, such as type II diabetes, heart disease, or hypertension, and certainly conditions such as anxiety or depression, has a spectrum of severity and chronicity; there are very important psychosocial factors, often present for years before the onset of the condition, that influence symptoms, severity, and progression.  There are feedback loops involving behaviour which cause spiralling exacerbations or rapidly accumulating harms in all of these conditions.  And treatments for diabetes or heart disease need to involve understanding and help with the lifestyle, social, and economic factors affecting these conditions, with long-term goals in mind.  

But it is not necessary to avoid calling diabetes a "disease."  Rather, the approach should be, in my opinion, to recognize that any disease state occurs on a continuum.  In many cases, there is no clear-cut line between disease and non-disease.  The word "disease" does not necessarily imply permanence, or a need for invasive, narrow, or institutional treatments.  For example, we could agree that viral pharyngitis is a disease, but not one which normally requires medical intervention.  Just as in addiction, many conditions uncontroversially considered "diseases" or at least pathological states, such as pneumonia, COVID, migraine, sciatica secondary to disc prolapse, psychotic episodes, or brain injury, can often resolve on their own without any treatment at all; but for some sufferers of these conditions, the symptoms become relentlessly chronic or more difficult to deal with.  Just because something has the possibility of improving on its own, or through lifestyle improvements, after days, months, or years, does not mean that it shouldn't be considered a disease.  

Furthermore, improvement in many conditions is sometimes associated with improved perspective or lifestyle, but sometimes it is just random.  Many patients I've seen have engaged in all the healthy perspective-taking and good lifestyle habits you can imagine, but are still afflicted by the same tormenting symptoms.  Other patients somehow recover from severe problems without changing their lifestyles much at all.  

Hypertension is a disease, with multifactorial causes, which often requires medication but always requires attention to lifestyle factors.   Simple, overly reductionistic medical treatments can sometimes help with certain disease states (such as repairing a broken limb) but in many or most disease states, medical treatments are only one branch of helping.  The other branches require attention to lifestyle factors, community or social supports, and possibly an existential focus, to help people regain an awareness and passion for long-term goals.  But this multi-pronged focus is what I consider to be normal medical care.  

Lewis argues that because the neurobiology of addiction features entirely "normal" activations of normal brain pathways, akin to learning or falling in love, addiction should therefore not be considered a disease.  But many conditions in medicine feature activation of normal physiologic functions as a component of their pathology.  For example, inflammatory states resulting from infection (a major pathology in COVID) are activations of the body's defenses to fight off pathogens, but the inflammation itself ends up causing severe tissue destruction.  The processes are all "normal," but the circumstances of the disease state (germ + host) cause the reaction to be disastrous.  A clear understanding of disease states and mechanisms, together with medical interventions to interrupt this cycle, is indicated to save lives and prevent widespread tissue destruction.  

Addictive states can lead to similar destruction of bodies, minds, relationships, and careers.  Just because the mechanisms involve activations of normal neural pathways does not mean we should avoid diagnostic language.    Problems associated with pathologizing labels, such as stigma (from others or from self) do not mean we have to avoid such labels entirely, but it may mean that the labels should be used with care and humility, rather than in a pejorative manner.  

There is interesting neuroscience describing addictive processes, but discussion of it can sometimes devolve into overly strong literal claims (e.g. about neuroplasticity), often based on compelling testimonial accounts without robust statistical evidence to back them up.  This is a pitfall I've seen with other authors touching on this area, such as Doidge.  Neuroscientific language then becomes a tool of persuasion, which sounds impressive to most people.  But it is much more important in this area to back up claims, especially those based on case studies or testimonial accounts, with careful reference to large controlled studies.  

Lewis has good ideas and a passion for his subject, but his insistence that addiction is not a "disease" is to some degree a semantic squabble, which subtracts needlessly from the impact of his book.  


Monday, May 23, 2022

The Elephant in the Brain & The Folly of Fools

 Two more books to recommend:  


The Folly of Fools (2011) by Robert Trivers and The Elephant in the Brain (2018) by Kevin Simler & Robin Hanson are both about the human tendency to engage in deception: not only the deliberate deception of others but the deception of self.  

Trivers approaches this issue from the point of view of genetics (he was the first to characterize the evolutionary biology of reciprocal altruism).  The capacity to deceive can be beneficial to survival, as we see in many species of animals, and in many human examples.  But such deception only works up to a certain point, an equilibrium frequency, beyond which the strategy fails.  If deception were too frequent, the evolved strategies to counter deception would render the deceptive strategy ineffective.  Similarly, cheating can be an evolved strategy, but if cheating occurs too frequently in a population, it would no longer be effective, due to widespread awareness and countermeasures in the population.  
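This frequency-dependent equilibrium can be illustrated with a toy replicator-dynamics model (my own sketch, not from Trivers's book; the payoff parameters b, c, and h are arbitrary, hypothetical values).  Deceivers earn a benefit that erodes as deception becomes common and countermeasures spread; honest types earn a fixed baseline; the population settles where the two payoffs are equal:

```python
# Toy frequency-dependent selection model of deception.
# Deceivers earn benefit b, minus a cost c*p from counter-deception,
# which spreads as the deceiver frequency p rises.  Honest types earn
# a fixed baseline h.  (Parameters b, c, h are hypothetical.)

def deceiver_payoff(p, b=2.0, c=3.0):
    return b - c * p          # payoff erodes as deception spreads

def honest_payoff(h=1.0):
    return h

def simulate(p=0.01, steps=2000, lr=0.01):
    # Replicator-style update: the deceiver frequency grows when
    # deceivers out-earn honest types, and shrinks otherwise.
    for _ in range(steps):
        gap = deceiver_payoff(p) - honest_payoff()
        p += lr * p * (1 - p) * gap
        p = min(max(p, 0.0), 1.0)
    return p

p_star = simulate()
print(round(p_star, 3))       # prints 0.333
```

With these parameters the population converges to p* = (b - h) / c = 1/3: deception persists, but only at a minority frequency, which is exactly Trivers's point about an equilibrium.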

Trivers goes on to argue that self-deception is a type of advanced deceptive strategy.  The capacity to effectively deceive others is enhanced if we can deceive ourselves.  If you REALLY believe you can win a fight (despite poor objective evidence), you are more likely to convince your opponent that you can win, and therefore are more likely to actually win, even if you utterly lack fighting skills. 

Unfortunately, self-deception leads to many serious problems in society.  Trivers goes through many examples, showing that horrible accidents, wars, biased research, and religious phenomena, are often driven by self-deceptive factors which end up causing disastrous results.  

His chapter called "Religion and Self-Deception" is particularly recommended.  

While I consider this book important and highly recommend it, I found its reasoning often quite informal, punctuated by forays into humour.  This can be a bit problematic when he wanders into areas (for example, politics, wars, and religion) about which many people are quite sensitive or easily offended.  There are bound to be sections in this book which could cause some readers offense.  


The Elephant in the Brain is quite a remarkable review of ideas from social psychology and behavioural economics.  There is influence from Kahneman, Trivers (The Folly of Fools is referenced), Haidt, and many other leaders in the research of this area over the past decades.  I think it's astounding that these two authors, who are not specialists in these areas, produced such a comprehensive and compelling summary of this research.  

The thesis of this book is that humans have a powerful motive to signal membership in groups.  The tendency to form ingroups is a powerful human trait, evolved over millions of years.  Group membership allows us to trust and collaborate with our group members, for safety, defence, maintaining a food supply, dealing with illness, finding a mate, and raising children.  But unfortunately, this tendency to form ingroups can become such a powerful motivation, often without our awareness, that it overwhelms reason, fosters needless and often terrible conflict with outgroup members, and can become very destructive or at least inefficient.    And the phenomenon tends to perpetuate itself, since members of ingroups (be it political or religious or cultural) tend to socialize, mate, and have children with fellow ingroup members.  

They refer to Bryan Caplan's argument about education, showing that a great deal of education serves mainly as an indirect signal of skill or competence.  Most people rarely, if ever, use the subject matter they learned in university in the work they do afterwards.  Instead, the degree and grades serve mainly as a competitive signal to employers about the capacity to complete work, conform stably to demands, etc.  I have reviewed Caplan's book elsewhere (I do have some disagreements about this).  

The authors show that political and religious membership have powerful ingroup effects.  The tendency to form strong beliefs about elements of religious doctrine can be understood as a badge of group membership; if one can engage in successful self-deception about these doctrinal elements, it is all the more effective as a group membership badge.    The beliefs become shibboleths which can allow some feeling of trust with co-believers, and a sense of distrust or frank dislike of outsiders.  Such belief systems can develop independently of rational moral reasoning.  While all religious systems contain positive insights about morality (e.g. "love your neighbour as yourself", "blessed are the meek", "blessed are the peacemakers," "judge not lest ye be judged", "do unto others as you would have them do unto you," etc.), the moral prominence of these beautiful insights is often lost in a cloud of doctrine that becomes more about maintaining an emblem of group involvement, an "us" vs. "them" mentality.  This mentality is a manifestation of an evolved trait pushing all humans towards group involvement, formation of local communities in which we can feel trust and belonging, but with the unfortunate consequence of having outgroups which we would not trust, and which we would treat with less positivity, warmth, and generosity.  

The same phenomena occur in political beliefs.  While there could be core rational beliefs about positions on a political spectrum, with regard to preferred economic strategy, international affairs, management of public works, etc., a great deal of political involvement involves doctrinaire beliefs that are badges of group membership, and which have nothing to do with any understanding of policy.  Most people don't even know what the policy positions are, exactly, of the candidates they vote for.   Many others support their ingroup's politicians even though the associated policies would be harmful to themselves economically or socially.    We have tragically seen this happen during the pandemic.  Extreme beliefs about vaccines, masks, etc. became emblems of political group membership; many people made decisions about these issues not because of rational evidence (which strongly supported vaccine and mask use, for the protection of everyone's health, including the anti-vaxxers' own health and well-being), but because of the beliefs of fellow ingroup members in political or religious factions.  Masks and vaccines have almost nothing whatsoever to do with religion or politics -- they are simply common-sensical public health measures -- but once these issues became badges of group involvement, the issue spiralled into disaster, to the detriment of everyone.  This is an extreme example of the phenomenon shown in the famous children's study, where kids randomly given shirts of a different colour end up forming hostile ingroups, opposed to each other.  In the case of the pandemic, a great deal of anti-vax belief was simply driven by factors akin to having a different shirt colour, just to show difference from an opposing outgroup.  

In both books, reference is made to psychiatric theory as an example of self-deception.  Psychoanalytic theory is basically a set of ideas akin to religious doctrine, with a strong ingroup community of "believers" who couch discussion of psychiatric issues through the lens of a theoretical system which is mostly fictional.  As with religions, there are core beliefs in psychoanalysis which reflect deep insight and wisdom.  For example, the idea of psychological defenses came from psychoanalysis, and is ironically an insight into the human tendency to engage in self-deception, with the implication that we should try to become aware of our defences and be able to set them aside.  Similar insights warning about self-deception can be found in religious texts.  But most of psychoanalytic theory is arbitrary, based on bizarre inferences made from case reports, coloured by the already biased opinions of the therapists.  Yet as with religious practices, much of the therapeutic value in psychoanalysis has nothing to do with the literal belief system; it has to do with the practice itself.  Visiting a trusted minister or priest, who would most likely be kind, gentle, understanding, supportive, and wise, could be a wonderfully healthy practice, as could a meditative practice of daily prayer, or visiting a congregation of loving friends.  These healthy and possibly healing effects would occur regardless of the belief system held by the group.  Similarly, the practice of psychoanalysis (or psychodynamic therapy more generally) involves frequent visits with a wise, compassionate, gentle, kind therapist who probably has some useful feedback about life problems, and there would be a healing effect of simply having a stable therapeutic relationship over a long period of time, irrespective of the fictional theoretical belief system held, such as strict Freudianism.  

While we can empathize with, and even endorse, the benefits of ingroup membership, I believe it behooves us to strive for improved rationality, to guide our knowledge and decisions so as to benefit ourselves, our neighbours, and the world in the most effective way.  Societies across the world have improved in this way over the centuries, as Steven Pinker has shown us (see Enlightenment Now), but we have a lot of work to do to continue progress in building a just, peaceful, prosperous society.  

In both books, we are wisely cautioned to look to ourselves for our own self-deceptions.  It is another human tendency to see self-deception or folly in others, while not noticing our own.  In my case, I recognize this will be a work in progress.   I surely have beliefs or practices that are products of my ingroup or other biases; I hope that I will be able to keep working on better awareness of these issues over time, in service to my patients and to myself.   



Monday, April 25, 2022

Review: Shrinking Violets: The Secret Life of Shyness, by Joe Moran

 Joe Moran's book is a nice exploration of various historical figures (such as authors, poets, and musicians) who had what he calls "shyness."  Moran alludes to his own shyness as well.  

A thematic goal of the book is to understand shyness as a part of the tapestry and variety of human life, as opposed to a pathology that requires treatment, or that is even treatable at all.  

Moran is a good writer--he's an English professor, and it is always a delight to read a book in this type of genre written by someone with a mastery of the language.  

This book is interesting as a historical or biographical journey, but I found it quite limited as a serious study of shyness from a psychiatric point of view.  First of all, "shyness" is a very limited term to describe the many varieties of anxiety, introversion, personality styles, and autistic traits likely present in some of his case studies.  

Near the end of the book, Moran encourages a position of gentle acceptance of shyness, but this acceptance seems to disparage the potential value of attempting to help people manage or change their social anxiety or avoidance using therapeutic techniques.  One chapter is even called "The War Against Shyness," which is a pretty strong condemnation of the therapeutic culture.    

There are many shy people, with what might be considered social anxiety or autistic traits, who might find therapy helpful: to improve social skills, to find ways of facing fears more comfortably, or even to reduce anxiety a notch (including with the help of medication).  We should always have modest or limited expectations of therapy; we also need to take care to affirm an accepting rather than a pathologizing stance, particularly since social behaviour and experience always exist on a spectrum.  Yet the best of modern therapy is affirming and accepting; it simply helps people to suffer a little bit less, and to have a little bit more freedom in their lives to do things they might find meaningful, enjoyable, or essential for survival or prosperity.