Thursday, February 26, 2026

The Psychology of Religion, Chapter 17: Shepherding

A related religious metaphor is shepherding. Jesus is called the “Good Shepherd,” and there are many other biblical passages that liken God to a shepherd. It is a beautiful image, and as a child I absorbed it in exactly that spirit: kindly pastoral artwork, a gentle man with a hooked staff, sunny hills, a flock of woolly friends, perhaps one little sheep who has wandered off and needs to be carried back to safety.

But it is worth pausing to remember what shepherding actually meant in that time and place. Sheep were not kept as pets. They were livestock: valued for wool and milk, yes, but also raised for meat—and sometimes for sacrifice. Sacrifice typically involved securing the animal with iron rings in front of an altar, cutting its throat, and collecting the blood in a special vessel to be splashed against the altar; the carcass was then hung from a hook, skinned, and its organs removed and burned.


A shepherd’s role was not only protection and guidance; it also involved ownership, control, and (eventually) decisions about which animals would be killed, sacrificed, or eaten. In that light, “being shepherded” contains an unsettling double meaning: you are kept from straying, guarded from wolves, and held within the safety of the flock—but you are also being managed toward ends that are not your own.

And if we push the image just one step closer to lived reality, it gets darker in a way the children’s illustrations never hinted at. Imagine being a sheep in the flock: every so often the younger males—your cousins, in a sense—are taken away. Perhaps they are led toward a little shed at the edge of the field, or down a path behind a stand of trees, and they are simply never seen again. The flock goes on grazing. The shepherd is still “protecting” the flock. But the protection is inseparable from a system in which some members are quietly designated for disappearance.

To be fair, the Christian image in particular tries to invert the usual arrangement: the “Good Shepherd” is portrayed as laying down his life for the sheep. That is morally striking. Still, the metaphor does something psychologically and socially important: it trains us to admire a certain kind of relationship—one in which docility is a virtue, “straying” is a moral failure, and the authority to define what counts as straying belongs to the shepherd.

The phrase “sheep gone astray” appears repeatedly in scripture, usually as a metaphor for human misbehavior. But actual sheep that never “go astray” do not graduate into freedom; they remain in the flock under management. As a child I never thought of this. Now I think the metaphor is revealing, because it quietly captures a profoundly unsettling moral posture: the idealization of a passive, domesticated existence where total subjugation is rebranded as pastoral care, and where the ultimate reward for perfect obedience is to remain in a community where you and your peers are quietly led to a brutal end, dictated entirely by the whims of the shepherd.


Wednesday, February 25, 2026

The Psychology of Religion, Chapter 16: Sacrifice

Most religions allude to some form of sacrifice in their theology. Sometimes this involves literal offerings—killing and burning animals, or destroying valuable objects. Other times it is “bloodless”: giving money, time, obedience, or the renunciation of pleasures through fasting, abstinence, or celibacy. In all these cases, the underlying idea is similar: something costly is offered up, with the hope of securing meaning, favor, purity, forgiveness, protection, or communal belonging.


There are also sacrificial motifs that move disturbingly close to human sacrifice. In the Abrahamic traditions, for example, the willingness of Abraham/Ibrahim to sacrifice his son is presented as a peak test of obedience—and in Islam it is commemorated annually in Eid al-Adha, the “Festival of Sacrifice,” in which animal sacrifice functions as a memorial of that story. And in Christianity, the theme of sacrifice is carried into the central story of Jesus: a dramatic moral and symbolic reframing of sacrifice into self-sacrifice, offered “for others.”


Nor is sacrifice some oddity of the Abrahamic traditions. Across much of the ancient world, sacrificial traditions were common, and they were often brutal. Ancient Greek religion had animal sacrifice. Vedic religion in India revolved around yajña, sacrificial ritual. Ancient China too had elaborate sacrificial practices directed toward ancestors and higher powers, sometimes involving animals and at times human beings. The Aztecs are especially notorious for human sacrifice. And the roots of all this may go back shockingly far. A 2019 archaeological paper on symbolic destruction says that “the earliest evidence, dated to about 26,000 BP,” comes from Dolní Věstonice, in the form of making and then “exploding” clay figurines. If that interpretation is right, proto-sacrificial or ritually destructive behaviour belongs among the earliest traces of symbolic culture that we have. 


Why would an all-powerful deity, especially one associated with the highest standards of morality, want a dead animal or a burnt work of art as a gift? One might think that a god worth revering would consider it a gift if you were to help other people, or care for the natural world, rather than destroy objects or kill things. But sacrificial systems do not usually work that way.

Reciprocity, Magical Thinking, and Social Technology


Sacrifice is, in my view, an extension of ordinary human ideas about reciprocity and gratitude—infused with magical thinking. In a community we do favors, give gifts, and care for one another. These behaviors can be altruistic, but they are also supported by norms of reciprocity. If one believes that a mystical power controls destiny, fertility, weather, health, wealth, or military success, it becomes psychologically “reasonable,” within that worldview, to give that power a gift—hoping for a return.


And once a person enters this mindset, the logic can become self-sealing. If you make sacrifices and misfortune still comes, you can conclude the offering wasn’t sufficient, wasn’t sincere enough, or wasn’t given with the right purity of heart—so you must increase it next time. If something good happens afterward, it feels like proof that the sacrifice worked, and should be repeated. In this way, practicing sacrifice can become an escalating, brutal, and destructive behaviour. The sacrificed animals—often the most vulnerable and least able to “consent” to the human story being told about them—do not get much say in the matter.


Another motivation for sacrificial rituals likely came from the brutal necessities of ancient life: hunting animals, or killing domestic animals for food. Most humans bond to animals easily, and it would be psychologically troubling to watch an animal struggle and suffer. Ritual can function as moral anesthetic: a way to consecrate violence, to assuage guilt, and to turn a grim necessity into a story of gratitude, order, and meaning.


Sacrifice can also be political performance. Public ritual can consolidate hierarchy, especially priestly hierarchy, display power, intensify fear, and signal unity. It is not hard to see how sacrifice functions as a kind of social technology: it makes shared belief visible and costly. It puts loyalty on display. It shows who is serious, who is obedient, who can be trusted, and who has the authority to declare what counts as holy.


This is also where sacrifice connects to group psychology. Some scholars have argued that costly rituals—things you would not do unless you were committed—operate as signals that strengthen trust and cooperation within a group, partly by filtering out free riders. A community bound together by shared sacrifice can feel safer, warmer, and more morally serious to its members. But that same mechanism can harden boundaries and intensify suspicion of outsiders.


And costly sacrifice does not merely send a signal to other people; it also works on the person making the sacrifice. People are generally reluctant to admit that they have suffered for nothing. So the greater the sacrifice, the stronger the pressure to reinterpret the suffering as meaningful, noble, or necessary. That helps make sacrificial systems self-protective and self-reinforcing. The cost itself becomes part of the “evidence” that the belief must matter.

Kin Altruism


Speaking of reciprocity: favoring and helping genetic relatives, sometimes even in self-sacrificial ways, is a strongly selected trait. If a person has a trait that causes them to selectively help close relatives, that trait will tend to persist in the family line, because close relatives are more likely to carry the same genes that produced the tendency in the first place. This is a simple evolutionary logic: kin altruism increases the survival and reproductive success of the shared family “pool,” even when it costs the individual something in the short run.
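
Evolutionary biologists later formalized this logic as Hamilton's rule: an inclination to help spreads when r × B > C, where r is the genetic relatedness between helper and recipient, B is the reproductive benefit to the recipient, and C is the cost to the helper. Here is a minimal sketch in Python; the example numbers are invented purely for illustration.

```python
def altruism_favored(relatedness: float, benefit: float, cost: float) -> bool:
    """Hamilton's rule: helping kin is selected for when r * B > C."""
    return relatedness * benefit > cost

# Invented example numbers: helping a full sibling (r = 0.5) at the cost
# of one offspring pays off genetically if the sibling gains more than two.
print(altruism_favored(relatedness=0.5, benefit=3.0, cost=1.0))    # True
print(altruism_favored(relatedness=0.125, benefit=3.0, cost=1.0))  # False: a first cousin is too distant
```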


But humans do not walk around calculating degrees of genetic relatedness. Instead, we rely on crude, fast estimates—cues that, over most of human history, were often correlated with kinship and shared ancestry. People who live near each other, marry each other, and raise children together will, over generations, tend to share not only genes but also language, accent, customs, dress, habits, and social norms. They may also tend, on average, to resemble one another physically more than they resemble people from a distant village, tribe, or lineage. Conversely, people who look different, speak differently, or practice very different customs are often from a different village, tribe, or family network—and therefore are somewhat less likely to be as closely genetically related as the people who share your immediate cultural and familial world. 


Similarity of appearance, familiarity of accent, shared habits, shared rituals, shared dress, and shared taboos can all become proxies—very imperfect proxies—for “one of us.”  Religion gives people common dress, common restrictions, common foods, common sacrifices, common songs, common stories, and common enemies. In other words, it manufactures the feeling of kinship, even among people who are not literally kin.


The mind has evolved to be slightly more generous, trusting, and self-sacrificing toward those who are more likely to be “one of us,” so it follows that it may also be less generous, more suspicious, or more emotionally distant toward those who feel like “not us.” These tendencies are not destiny, and they are not moral justification—but they are part of the psychological and evolutionary foundation of prejudice. These are precisely the sorts of inherited inclinations we must learn to recognize, challenge, and actively override.


Belonging and Group Boundaries


Religion can sometimes widen the circle of felt family. But it can also strengthen the distinction between those inside the group and those outside it. Once sacrifice, loyalty, and group identity are fused together, shared customs can take on unusual emotional and moral weight, and group boundaries can begin to feel especially important. The stronger those boundaries become, the easier it is for outsiders to be viewed with suspicion, distance, or moral distrust. This does not mean religion always produces hostility, or that it does so uniquely. These are broader features of human social psychology. But religion can give them a sacred language, a ritual structure, and a greater sense of seriousness. In that way, stronger religious boundaries can contribute to increased exclusion and, in some cases, increased hostility between groups. Religion does not invent this psychology, but it can reinforce it.




The Psychology of Religion, Chapter 15: Spirituality & Superstition

Humans have cognitive tendencies that make superstitious beliefs easy to generate—and hard to extinguish. By “spirituality” here I do not mean awe, contemplation, or reverence in a broad sense; I mean the more specific belief that hidden forces—fate, synchronicity, spirits, or nonphysical “energy”—are actively guiding events. Beliefs in spirits, ghosts, magic, luck, or fate guided by mysterious forces are widespread across cultures. The specifics vary wildly from place to place—local spirits, protective rituals, sacred objects, invisible dangers—but the underlying psychological grammar is familiar.

Meaning, Pattern, and Agency

A core ingredient is pattern-seeking. The mind craves meaning, and when the world is uncertain or painful it will often manufacture meaning rather than tolerate ambiguity. This is not a sign of low intelligence; it is ordinary cognition under stress. Pattern-seeking is only part of the story: humans also readily detect agency, intuit purpose, and imagine hidden minds or forces operating behind ambiguous events. When people feel a loss of control, they become more likely to perceive patterns—even illusory ones—in the environment, and to treat coincidence as signal. Superstition can be emotionally satisfying precisely because it converts randomness into a story.

Stories, dreams, unusual experiences, and compelling anecdotes can then become socially transmissible. Once a few people begin to interpret events through a “hidden forces” framework, the framework spreads: it gives language to fear and hope, it creates a sense of specialness, and it offers the pleasure of explanatory closure. Coincidences become “signs.” Ambiguous perceptions become “messages.” A confusing life becomes a legible plot.

Beliefs about fate, synchronicity, or “good and bad energy” fit neatly into this same psychology. A person has a strong feeling—dread, relief, attraction, foreboding—and the mind is tempted to treat that feeling as information about the outer world. A difficult decision can then feel as though it has been answered by “the universe.” A coincidence becomes destiny. A run of bad luck starts to feel orchestrated. The step from “this feels meaningful” to “this is objectively meaningful” is, for many people, quite small. In cultural settings where unusual feelings are already given a supernatural vocabulary, it becomes even easier for an ordinary human experience to be interpreted as fate, guidance, or invisible force.

Why It Can Feel Helpful

Sometimes these beliefs can even confer a short-term psychological benefit. A ritual, a talisman, or the conviction that “good energy” is on one’s side can reduce anxiety, increase confidence, and make a person feel more ready to act. In that sense, superstition can work a little like prayer, placebo, or a pre-performance routine: it changes the person’s emotional state, and that changed emotional state can sometimes improve performance or endurance. But this does not validate the supernatural explanation. It shows that belief can alter mood, attention, and confidence—not that a mystical force is operating in the background.

When Meaning Hardens into Causation

The trouble begins when a poetic or emotionally satisfying interpretation hardens into a literal theory about reality. At that point there is no longer only a harmless sense of wonder; there is a false model of causation. There is still no robust, independently replicated body of evidence that psychic forces, spirits, or nonphysical “energies” of this sort are objectively guiding events in the way believers often suppose. And there is no good reason to treat a strong feeling of destiny as evidence that destiny is real. Once such beliefs are treated as evidence, judgment begins to drift away from probability, base rate, character, and practical consequence. A person may stay in a bad relationship because it feels “meant to be.” They may avoid a sound medical treatment because the illness is thought to be spiritual. They may take reckless risks because fate is presumed to be protective. Life planning becomes poorer when omens and vibes displace sober thinking about what is actually happening.

When Superstition Turns Social

There is a darker social risk as well. Once people begin to believe that invisible moral or spiritual contamination clings to persons, places, or groups, superstition can become a license for prejudice. History offers grim examples of what happens when communities weaponize these causal illusions. The early modern European witch crazes, which claimed tens of thousands of lives, were driven in large part by the urge to assign occult causality to illness, infant mortality, crop failure, or social misfortune. Medieval blood-libel accusations and later pogroms during epidemics drew on related fantasies of hidden contamination and malevolent agency. A more contemporary example can be seen in the persecution of people with albinism in parts of Sub-Saharan Africa, where witchcraft beliefs and ritual myths still endanger lives. In modern, everyday life, the seeds of this same pathology are more banal but equally insidious: a neighbour is said to have “dark energy.” A house is called cursed. A child is treated as spiritually tainted. A stranger is felt to be threatening in some occult way rather than simply unfamiliar. Once a group shares such assumptions openly, they no longer remain private quirks of interpretation; they coalesce into a moral atmosphere in which exclusion, suspicion, and even physical violence feel justified. This is how irrational belief can slide from the sanctuary of private comfort into the arena of public harm.

A Humane Response

At the same time, this topic calls for sensitivity. For the person immersed in such beliefs, the experience does not feel frivolous. It may feel visceral, self-evident, and woven into memory from early life. It may have been reinforced for years by trusted friends, family, charismatic figures, selected anecdotes, online communities, and a steady diet of “paranormal” documentaries or videos that showcase apparent hits while ignoring the endless misses. When a belief has been stabilized by familiarity, repetition, and community endorsement, challenging it can feel less like an intellectual correction than like an invalidation of lived experience. The humane response is not to mock the feeling. The feeling is real. What deserves challenge is the conclusion drawn from it.

A Psychiatric View

From a psychiatric point of view, there is also genuine individual variation in proneness to unusual, mystical, or numinous experience, and that variation is not in itself pathology. Some people reliably feel awe, presence, synchronicity, and “spiritual certainty,” while others rarely do. Some people seem to have a more absorptive mind: more prone to inner vividness and felt significance. This is shaped by personality and temperament, by culture and reinforcement, and by biology. One useful but imperfect metaphor is that some minds run with higher “gain”: experience arrives vivid and compelling, but with a greater risk that noise is interpreted as signal. Salience systems in the brain are part of this story, though the biology is not reducible to dopamine alone. A related literature suggests that paranormal belief is associated, on average, with more intuitive thinking styles and some weaknesses in probabilistic thinking and analytic reasoning, though of course none of this maps neatly onto any one individual.

Spirituality and Religion

Many members of organized religions disparage “superstition” or free-floating “spirituality.” Yet in psychological terms—at the level of cognitive ingredients—the differences are often of degree rather than kind. Organized religions tend to formalize these human tendencies into institutions: they standardize the stories, professionalize the interpreters, and link belief to group identity and obligation. “Spirituality,” in contrast, often keeps the intuitions while loosening the institutional grip. But both draw on the same human appetite for meaning, comfort, narrative, and relief from uncertainty.


The Psychology of Religion, Chapter 14: Religion as a Business

Many religions and other spiritual practices operate partly like a business. There is marketing (proselytization, outreach), branding (symbols people wear on clothing or necklaces), encouragement to be loyal to your brand, and criticism of other brands. There is also a financial commitment, which leads to an organized financial structure. Members of that structure have work to do, with an ultimate goal—explicit or implicit—of retaining and expanding membership, eliciting volunteer effort and financial contributions, and maintaining morale.

With some intensely tribal, high-commitment groups (fraternities are the obvious benign example, gangs the darker one), there can be an onerous initiation ritual. Social psychologists have shown that when people have to work hard, endure discomfort, or pay a steep price to join, they often become more loyal afterward—partly because the mind naturally tries to justify what it has sacrificed. Religions also commonly have initiation processes: potential members may be vetted, attend educational sessions, and then take part in some public ritual in which solemn commitments are made.

Sometimes, as with luxury business models, broad proselytization does not occur; instead, the “product” is restricted. Only a select few gain entry. In some traditions you need advanced membership—often taking years—before you are allowed to enter certain beautiful buildings such as temples, or partake in certain deep rituals. Sometimes only men are allowed into certain leadership roles or ritual spaces. These obstacles increase the allure and tend to attract people willing to contribute more commitment, time, and money. If everybody had a Rolex watch or a Gucci bag, it would cease to be as special; exclusivity is part of what makes the object feel “high-end.”

One particular feature of religion that resembles a corporate tactic is the elevation of belief alone—faith—as a key virtue. Belief without evidence is not merely tolerated; it is often praised. If a corporation could successfully propagate that idea, it would be extremely useful for marketing, since people would form loyalty to the brand without looking too closely at “reviews.” Doubt could be reframed as weakness, betrayal, or impurity. Meanwhile, “true believers” are rewarded: their status, trust, and esteem in the community rise in proportion to their loyalty.

In many cases religious institutions amass vast wealth: in property, buildings, and investments. In at least some prominent modern examples, credible reporting and public filings have described religious investment holdings on the order of tens of billions of dollars, with wider claims in some cases exceeding $100 billion—figures that are difficult to reconcile with the ordinary believer’s image of humble spiritual stewardship. And these structures often operate with significant tax advantages. In the United States, churches are generally treated as tax-exempt. In Canada, registered charities (including many religious organizations) are exempt from paying income tax while registered. 

And yet, some of the most insightful cautions about wealth come from within religion itself. One of the sharpest is the line attributed to Jesus (present in all three Synoptic Gospels): “it is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God.”

The Psychology of Religion, Chapter 13: To Be a Scholar, You Had to Study Theology

For long stretches of European history, many able people who wanted to study the biggest questions ended up studying religion. That was not always because religion had the best answers. Often it was because religious institutions controlled much of the schooling, the books, and the road to public influence. If you wanted literacy, training, status, or a respected voice in your community, religion was often the main gate you had to pass through.

Before the printing press, and before secular universities became common, the Church often controlled many of the material conditions of scholarship as well: the copying of books, access to manuscripts, and much of the economic support that allowed a person to read, write, and think rather than spend life in manual labor. In practice, that often meant that to be a scholar was to be funded, housed, trained, or at least tolerated by a religious institution. That inevitably shaped what could be said, what could be explored, and how far a person could go.

This created a built-in bias. It was not just that highly capable people happened to like theology. It was that theology sat near the center of educated life. If you wanted mentors, libraries, credentials, or a place to teach and write, you often had to work inside a religious setting, or at least learn to speak its language. In that kind of system, religion could borrow prestige from the educated people who passed through it.

That borrowed prestige mattered. When people saw a brilliant, educated, generous person who was also a priest, minister, or theologian, they could easily draw the wrong lesson: if someone this thoughtful believes it, maybe the religion itself must be true. But that does not follow. A person can be wise, morally serious, and deeply useful to society while still believing things that are false. Learning and talent do not make a doctrine true.

There was another pressure as well. In many periods, if you were a serious thinker and openly challenged the religious system, you were not just risking an argument. You could lose your position, your audience, your safety, or even your life. So the historical record is not a fair contest in which every idea had the same chance to survive. People who stayed within accepted limits were more likely to keep teaching, keep publishing, and keep being remembered.

Galileo is a clear example. He used a telescope to study the sky and argued publicly that Earth moved around the sun. Today that sounds ordinary. In his time it crossed powerful religious limits. He was put on trial in 1633 and spent the rest of his life under house arrest. Whatever one thinks about all the details of the case, one lesson is plain: a scholar’s standing and safety could depend on not going too far beyond approved belief.

Giordano Bruno shows a somewhat different version of the same pattern. He put forward bold ideas about the universe and about religion itself, and authorities judged some of those ideas unacceptable. He was executed in Rome in 1600. The point is not to turn him into a simple hero of science. The point is that religious power helped set the boundaries of acceptable thought, and crossing those boundaries could be deadly.

And this pressure was not limited to astronomy. Reformers, translators, and other dissidents were also punished severely in different times and places. Even when the issue was not a new scientific discovery, the deeper conflict was often the same: who gets to define truth, and what happens to you if you say otherwise in public?

Europe was not unique in this broad pattern. In the Middle East and the wider Islamic world, serious learning often grew around mosques, religious schools, and scholars of law and scripture. In China, higher learning and public advancement often depended on mastery of the Confucian classics and success in the imperial examination system. In India too, advanced learning often grew around religious traditions, temple settings, monasteries, and learned priestly circles. The details differed from place to place, but the larger point remains: when one tradition controls the road to education and status, its ideas can start to look far more intellectually established than they really are.

To be fair, religious institutions also preserved and passed on learning in many eras. They copied books, trained students, and helped keep scholarship alive. Several well-known universities began in religious settings or still carry that identity. So the story is not simple. But that is exactly why the bias is easy to miss. When the same institution both protects learning and sets the limits of belief, educated people will naturally be overrepresented inside that system.

You can still see a milder version of this today. Many excellent private schools, colleges, and universities are sponsored by religious organizations and maintain very high academic standards. Some of them also require students to take courses in religion, attend chapel, or absorb a broader moral and intellectual outlook shaped by a faith tradition. The point is not that these schools are weak academically; many are outstanding. The point is that strong education and religious commitment can be packaged together in a way that makes the religion seem more intellectually confirmed than it really is.

There is a second modern echo too. In some religious schools, seminaries, tightly bound communities, and other closed systems, career paths and social acceptance can still depend on affirming the right beliefs. The penalties are usually softer now—not execution, but loss of job, loss of status, loss of community, loss of belonging. The basic pattern is similar: a belief system becomes part of the ticket into a valued world. And once again, this can make it look as though the best minds support the belief, when in reality some critics have left, been forced to stay quiet, or been pushed out.

So the old link between theology and scholarship was often not straightforward proof that theology was true. Much of it grew out of history, institutional control, and control over the means of education. For long stretches of time, if you wanted a serious education, you often had to study religion; and if you wanted to keep your place in public intellectual life, you often had to stay inside religion’s limits. That alone can make religion look intellectually stronger than the evidence for its literal claims really is.



The Psychology of Religion, Chapter 12: Benefits of Unfounded Belief

Another troubling angle on all of this is that people can sometimes latch onto a belief system whose claims are unfounded or causally misconceived, and yet experience real improvements they had not found otherwise. Those improvements can entrench the belief further, because the person now has what feels like lived proof: It worked for me. The mechanism is similar to the self-deception dynamic Trivers describes, and also similar to the way psychoanalysis could help some people even when much of its causal theory was exaggerated, overstated, or wrong.

For example, some people latch onto an extremely rigorous diet with a spurious rationale, and yet end up stabilizing a prior eating problem, bingeing pattern, or weight problem. Often what makes the diet “work” is not the theory, but the frame: the diet becomes a totalizing structure, a rule-set, a ritual, a commitment device, sometimes even a moral identity. Strong belief in the diet’s narrative can increase adherence—sometimes dramatically—especially when the belief is reinforced by “spiritual” practices, authoritative texts, a charismatic leader, and enthusiastic support from fellow adherents. The resulting improvement may have little to do with the supposed mechanism (“toxins,” “energy,” “impurity,” or whatever the myth is) and much to do with ordinary behavioral ingredients: reduced ultra-processed food, fewer calories, more routine, more attention to quantities and timing, and stronger social accountability. In other words, the distinguishing features of the theory may be fictional, while the behavior change is real.

This is an extension of the nonspecific-factors argument, but with an important twist: people may become more loyal to the false theory because the surrounding practices genuinely helped them. And expectancy itself matters. In psychotherapy, patients’ early outcome expectations are associated with better outcomes, and placebo research shows that ritual and a warm practitioner relationship can produce real symptom change. In some settings, even open-label placebos—interventions people are told are inert—can still help. That matters here because it shows that the human benefit does not necessarily require the theory to be true. In some cases it does not even require deception.

It is tempting to treat these forays into unfounded belief as harmless whenever they produce visible gains. But there is a dark side. Some dietary regimens are medically dangerous; some aggravate eating disorders; and some cultivate a loyalty to the framework that discourages critical thinking. Social media can intensify the problem, especially when health-focused communities reward purity, rigidity, and bodily control.

When a person’s identity becomes fused with a belief system, they may reject better treatments when those treatments are indicated—especially if a setback is interpreted as evidence of insufficient “faith,” insufficient purity, or insufficient devotion. The harms here are not merely theoretical. In patients with curable cancers, complementary medicine use has been associated with greater refusal of conventional treatment and worse survival, with the excess mortality appearing to be mediated by delay or refusal of effective care. 

These frameworks also often come packaged with community. People who join one cluster of unusual health beliefs can sometimes be pulled, by social gravity, into neighboring clusters: new spiritual doctrines, anti-vaccine attitudes, conspiratorial styles of explanation, and monetized ecosystems of coaching, supplements, retreats, and memberships. The pattern is not inevitable, but it is real enough to take seriously. 

And there is another distortion worth naming: we mostly hear from the success stories. The people for whom the diet failed, harmed them, or simply became an expensive obsession rarely become public evangelists. The community’s narrative therefore becomes skewed toward “miracles,” while the quiet attrition and collateral damage remain largely invisible.

Finally, just as in religions, the next step is often proselytizing. People who believe they have found salvation—whether dietary, medical, or spiritual—tend to recruit. They may pressure friends and family to “convert,” and disparage outsiders as ignorant, impure, or closed-minded. In the context of fad diets and alternative medicine, that can do real harm to public health.

So the point is not that unfounded belief never “helps.” The point is that when it helps, it often does so through common human mechanisms—structure, community, meaning, identity, expectancy, and accountability—while smuggling in risks that are easy to deny and hard to reverse once the belief becomes an emblem of belonging. The deeper problem is not merely that the belief is false. It is that the felt benefit is then misread as proof that the belief was true all along.

The Psychology of Religion, Chapter 11: Evolution

Evolution deserves space here because it explains the origins and diversification of life in a way that directly contradicts literal religious or mythological accounts of creation. It is certainly possible to remain religious while accepting evolution—many people do—but for some believers, evolutionary biology feels like an unacceptable affront to faith, because it replaces a story of intentional design with a story of natural processes unfolding over vast time.

There are experts in evolutionary science who can explain this far better than I can. Still, I want to set aside space for it in my own voice, because the basic logic is not hard to understand, the evidence is overwhelming, and the emotional resistance to it often has very little to do with evidence and a great deal to do with identity, belonging, and sacred narrative—topics I have already been discussing.

What follows is a tour through the core mechanism of natural selection, a few common misunderstandings, the idea of speciation, and then several side corridors that matter for this essay: cultural evolution, sexual selection, and the uncomfortable fact that even religiosity itself is shaped not only by culture, but also by temperament, inheritance, and biology.

Selection

Natural selection is the central guiding principle of evolutionary theory. The logic is profoundly simple. It requires only that we accept three basic facts:

1) Organisms vary (physically, physiologically, behaviorally).

2) Some variation is heritable (traits are influenced by DNA, even though environment matters enormously too).

3) Some traits affect reproductive success—not in a morally loaded sense of “deserving,” but in the literal sense that some variants leave more surviving offspring than others.

If a heritable trait increases the probability of leaving more surviving offspring in a particular environment, then over generations the population will contain more of that trait. If a trait reduces reproductive success, it tends to diminish. That's it: natural selection is differential survival and reproduction acting on inherited variation, repeated over time.

You can see the basic logic everywhere, from selective breeding in crops and animals ("artificial selection"), to antibiotic resistance in microbes, to the obvious family resemblance in both physical and psychological traits among human relatives. None of this requires the belief that genes are the only cause of traits. It requires only the admission that heredity is a major contributor. A point worth emphasizing, because it is often misunderstood, is that natural selection is not about “improvement” in any moral or progressive sense. It is simply a filter that favors whatever works well enough in a local environment at a given time.
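
For readers who like to see the mechanism run, here is a toy simulation of that filter; every parameter (population size, the 10% reproductive advantage, the starting frequency) is invented for illustration. A heritable variant that slightly raises expected offspring count climbs in frequency for no reason other than differential reproduction.

```python
import random

random.seed(42)

POP_SIZE = 1_000
GENERATIONS = 60

# Each individual is just a flag: carrier of the advantageous variant or not.
# Start with the variant in 5% of the population.
population = [random.random() < 0.05 for _ in range(POP_SIZE)]

for gen in range(GENERATIONS + 1):
    if gen % 10 == 0:
        freq = sum(population) / POP_SIZE
        print(f"generation {gen:3d}: variant frequency = {freq:.2f}")
    # Fitness = relative expected offspring count; carriers get 10% more.
    weights = [1.10 if carrier else 1.00 for carrier in population]
    # Offspring inherit the parent's variant exactly (no mutation here);
    # sampling parents in proportion to fitness is the entire "selection".
    population = random.choices(population, weights=weights, k=POP_SIZE)
```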

Mutation

DNA replication is highly accurate, but not perfect. Across generations there are small changes—mutations—introduced into genetic material. “Mutation” here does not mean “bad.” It simply means “change.” Most mutations are neutral, many are harmful, and a few are beneficial in a given environment. Sexual reproduction adds still more variation by shuffling existing variants into new combinations, so evolution does not work only with brand-new mutations but also with new mixes of old genetic material. 

Mutations do not happen because the organism needs them. A bacterium under stress does not somehow produce the exact helpful mutation it would most like to have. At the deepest level, whether a particular copying error happens in a particular cell at a particular moment is still a fundamentally accidental event. But that does not mean every part of the genome is equally likely to change. Some kinds of DNA changes happen more often than others, and some parts of the genome have a higher probability of experiencing mutations than others. Mutation is fundamentally random in origin, but statistically biased in pattern, and those biases affect what raw material natural selection gets to work with. 
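
A toy sketch of that distinction, with invented rates: each mutation event below is pure chance, blind to usefulness, yet some sites reliably accumulate more changes than others.

```python
import random

random.seed(1)

N_SITES = 100
ROUNDS = 10_000
# Invented per-site mutation rates: every tenth site is a "hotspot"
# mutating ten times more often than the rest (biased in pattern).
rates = [0.010 if site % 10 == 0 else 0.001 for site in range(N_SITES)]

# Random in origin: each event is pure chance, with no reference
# to what the organism "needs".
hits = [0] * N_SITES
for _ in range(ROUNDS):
    for site, rate in enumerate(rates):
        if random.random() < rate:
            hits[site] += 1

hotspot_avg = sum(hits[0::10]) / 10
other_avg = sum(h for i, h in enumerate(hits) if i % 10) / 90
print(f"avg mutations per site: hotspots {hotspot_avg:.0f}, others {other_avg:.0f}")
# Roughly 100 vs 10: the pattern is biased even though each event is random.
```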

When small genetic changes happen to produce a trait that improves survival or reproduction in a particular environment—say, a slightly different beak shape that lets a bird get food more efficiently—those variants will, on average, become more common. They become more common for a very simple reason: the creatures carrying the useful variant survive better and leave more offspring. Over many generations, the accumulation of such changes can produce substantial transformations, including changes in complex organs and behaviors. Darwin’s finches in the Galápagos remain a famous entry point into this idea because long-term field work and later genetic work showed how ecological pressures can shape beak traits across populations. 

The Time Scale of Evolution

One reason evolution feels counterintuitive is that large organisms reproduce slowly relative to a human life. Big evolutionary changes can take thousands or millions of years, just as major geological or astronomical processes do. We do not watch a canyon form in a single afternoon, and we do not watch a star age in real time, but the evidence for those processes is still decisive. Evolution is similar: long processes are inferred from converging lines of evidence. Fossils are one line of evidence—imperfect, but immensely powerful. Fossilization is rare and biased toward certain environments and tissues, so the record will always be incomplete. Still, the overall pattern—order in time, branching diversification, and transitional forms—fits evolutionary predictions very well. Comparative anatomy, fossils, biogeography, embryology, and genetics all point to the same branching story. 

Common Ancestry and Cousins

A cliché misunderstanding goes like this: “Evolution says humans descended from chimpanzees.” That is not what evolutionary biology claims. Humans and chimpanzees are not ancestor and descendant; they are cousins. And “cousin” is not just a metaphor here. It is literally the right genealogical idea: two living lineages sharing an older common ancestor. In ordinary family language, we talk about first cousins or second cousins. Chimpanzees are obviously not that kind of close cousin; if one insisted on stretching the family language into absurd distances, they would be something like our three-hundred-thousandth cousins. Current genetic evidence places the human–chimpanzee split on the order of 6 million years ago.
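
The “three-hundred-thousandth cousin” figure is just back-of-envelope arithmetic on that date; here is the calculation, assuming a rough 20-year generation time (the true average generation interval is uncertain and has varied).

```python
split_years_ago = 6_000_000    # approximate human–chimpanzee divergence
years_per_generation = 20      # rough assumption; real estimates vary
generations = split_years_ago / years_per_generation
print(f"~{generations:,.0f} generations back to the shared ancestor")
# ~300,000: hence "three-hundred-thousandth cousins", stretching
# ordinary family vocabulary to an absurd genealogical distance.
```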

The same logic generalizes outward. Every living thing on Earth is, in the broad genealogical sense, our cousin. For many people, that is not depressing at all. It is a source of awe—a kind of cosmic kinship. If a reader wants a visual sense of this, it is worth looking up one of the modern phylogenetic tree charts online, such as OneZoom. The exact dates of particular branching points are still being refined, but the branching structure itself is already known very well. And the scale is staggering. Humans and mice last shared a common ancestor around 90 million years ago. Mammals and birds last shared a common ancestor on the order of 310 million years ago. Humans and fish share a common ancestor on the order of 450 million years ago. The exact numbers continue to be refined, but we are talking about tens to hundreds of millions of years, not a few thousand. (OneZoom; Tree of Life Explorer)

Speciation

Over the short term, evolution often looks like shifting trait frequencies within a species. Over the long term, divergence can accumulate until populations become reproductively isolated—meaning they can no longer interbreed successfully under natural conditions. That is speciation.

Speciation is not always a sharp on/off switch. In nature it often behaves more like a continuum. The Ensatina salamander example is a real case, and it is useful because it makes that idea vivid. Around California’s Central Valley, populations of these salamanders spread in a rough loop. Neighboring populations along the loop can still interbreed with the populations beside them. But by the time the two far ends of the loop meet again in Southern California, they have changed enough that they no longer interbreed successfully with each other, or do so only rarely. A simple analogy would be a circle of people, each speaking a dialect just a little different from the dialect of the person next to them. Each person can understand the neighbors on either side, but the person across the circle sounds incomprehensible. Ring species work something like that, except the differences are genetic and reproductive rather than linguistic. That is the point: species boundaries can emerge gradually, not by magic and not all at once.

One thing I want to state explicitly, because people often get confused here: even if a behavior or trait has evolved, that does not mean it is morally right, or that we should accept it. Evolution describes how traits spread. It does not tell us what we ought to value.

Compromise, Not Perfect Design

Evolution also does not produce perfect design. It modifies what already exists. That is why living bodies so often look less like clean engineering and more like a history of workable compromise. Humans are full of such compromises. In adult humans, the passage for food and the passage for air share anatomy in the throat, which is one reason choking is even possible. Our spines and backs also show the costs of walking upright. The human spine has an S-shape that helps us balance over our hips and walk efficiently on two legs, but it also turns a structure inherited from four-legged ancestors into a vertically loaded column. The result is chronic stress on the lower back, high rates of disc degeneration, herniated discs, sciatica, and persistent back pain. Human childbirth is unusually difficult because pelvic form, fetal growth, and the evolution of large brains create a tight compromise. It is not as though the body is simply poorly designed. The point is that evolution works with inherited materials and trade-offs, not with fresh blueprints. 

The vertebrate eye is another striking example. The retina is effectively wired backward: light has to pass through layers of neural tissue before it reaches the photoreceptors. Where the optic nerve exits the eye there is a literal blind spot, because that patch contains no photoreceptors. And because light must travel through the retinal layers first, the vertebrate eye needs compensatory tricks just to reduce scattering and preserve image quality. Müller cells help act like optical fibers to guide light through this awkward arrangement. The design also forces a trade-off between a clear optical path and the blood supply the retina needs. The system works, but it is hardly what a tidy engineer would draw from scratch. It also comes with structural vulnerabilities: the retina can detach, and when it does, vision can be permanently damaged. The octopus, whose camera eye evolved independently, ended up with a more direct setup. Its retina is everted rather than inverted, so the nerve fibers are routed behind the photoreceptors and there is no comparable blind spot. 

Our jaws and teeth tell a similar story. Over time, human faces and jaws have become smaller without tooth size shrinking in perfect proportion. The result is a mismatch between tooth size and available arch space. Softer and more processed diets appear to worsen the problem by reducing the chewing demands that help stimulate jaw growth. So many modern humans grow jaws that are simply too small for the full dental package they still carry, leading to orthodontic problems, crooked teeth, and impacted wisdom teeth. 

The broader point for me is simple: evolved traits are not always morally admirable or functionally ideal. Some have to be restrained, some have to be accommodated, and some have to be counterbalanced by culture. If we want to become more humane, we need rules, norms, education, medicine, and institutions that limit certain inherited tendencies, compensate for the problems they create, and cultivate better ones.

Cultural Evolution

A parallel kind of evolutionary logic shows up in culture. Dawkins coined the term “meme” for cultural units that replicate, but the broader point matters more than the label. Ideas, phrases, rituals, fashions, and institutions can spread, vary, compete, split, and disappear.

Language evolution is a very good example. Languages branch and drift. Over time, groups can become mutually unintelligible; they become different language "species." Linguists can reconstruct family trees and infer common ancestors in ways that are strikingly analogous to biology. Proto-Indo-European—the distant ancestor of languages as varied as English, German, Greek, Russian, Persian, and Hindi—existed about six to eight thousand years ago. English, German, and Dutch are much closer cousins inside the Germanic family, with a common ancestor a little over two thousand years ago. French, Spanish, Italian, Portuguese, and Romanian are cousin languages that began diverging from Latin roughly fifteen hundred to two thousand years ago. And the same logic extends beyond Europe. Mandarin and Cantonese are also cousins. Their shared recognizable ancestor lies in Middle Chinese, roughly a thousand years in the past. What begins as a dialect can, given enough time and separation, harden into a clearly distinct language.

And language divergence does not require millennia only. Some changes become obvious in just the last few centuries. Afrikaans, for example, diverged from colonial Dutch over roughly the last three to four hundred years after Dutch settlement at the Cape in 1652. Even before a speech form is officially labeled a separate language, strong regional varieties can become difficult for outsiders to follow. There are strong accents within our own country that can be difficult to understand. Mutual understanding can erode gradually long before people agree on where to draw the boundary. 

Religions also behave this way. Doctrines split. Schisms occur. New denominations form. We see this clearly in Christianity: the East–West Schism of 1054 formalized the great split between Roman Catholic and Eastern Orthodox Christianity; the Reformation is conventionally dated to 1517; and those Protestant branches splintered further into Lutherans, Calvinists, Anabaptists, Anglicans, Methodists, Baptists, Pentecostals, and countless smaller groups. This fragmentation is not just ancient history. In the United Methodist Church, more than 7,600 U.S. congregations left between 2019 and the end of 2023, roughly a quarter of the denomination’s earlier U.S. total. Buddhism diversified too. The older matrix was early Buddhism. Theravada traces itself to one early line of that tradition, while Mahayana arose later as another major descendant branch around the beginning of the Common Era.  Islam experienced its defining Sunni–Shia split in the struggles over succession after Muhammad’s death. The family-tree metaphor is not perfect, but it is illuminating: religions do not descend from heaven as finished products. They branch, drift, quarrel, and split inside history.

Psychologically, that has an important implication: people often treat their own local, historically contingent version of a faith as if it were timeless and universal, when in reality it carries the fingerprints of geography, conflict, institutions, inheritance, and politics.

Sexual Selection

Another important evolutionary idea is sexual selection: traits can spread not because they help survival directly, but because they affect mating success. The peacock’s tail is the classic case—beautiful, costly, cumbersome, and yet selected because it becomes desirable within the mating preferences of the species. Darwin understood that biology is not only about staying alive long enough to reproduce; it is also about courtship, display, preference, and attraction. Richard Prum has argued, persuasively in my view, that sexual selection can include a genuinely aesthetic component: preferences themselves can become evolutionary forces, and traits can spread because they are found attractive, not merely because they advertise some practical advantage. 

This is relevant to humans not because we are peacocks, but because it reminds us that biology is not only grim survival calculus. Human mate choice also attends to looks, voice, movement, confidence, style, humor, conversation, and perhaps aspects of intelligence itself. Some theorists—notably Geoffrey Miller—have argued that traits such as humor, creativity, artistry, music, and parts of intelligence may have been shaped at least partly by sexual selection because they function as displays: they can signal mental agility, creativity, or the ability to hold another person’s attention. The evidence here is mixed and the details are debated, but the underlying idea is serious and worth considering.

Temperament, Inheritance, and Religion

Religiosity itself is not only cultural. Which particular religious group a person belongs to is largely a cultural and family-transmitted matter. But the broader tendency to be religious at all—to find religion important, compelling, consoling, or identity-defining—shows a meaningful inherited component. Twin research repeatedly suggests that adult religiosity has a moderate hereditary component, but the size depends a great deal on what exactly is being measured. Broadly speaking, estimates often land from the high 20s into the low 60s. More outward or socially enforced measures tend to sit lower, while more inward, identity-heavy, or conversion-like dimensions can sit higher. And when researchers say that some dimension of religiosity has a heritability of 60%, they do not mean that 60% of one person’s religion is “caused by genes.” They mean that, in the population being studied, about 60% of the variability between people on that trait is statistically associated with genetic variability.
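
For the statistically minded, here is a small sketch of what that kind of claim means; the variance components are invented so that the true value is 0.6. A trait is simulated as a genetic part plus an environmental part, and “heritability” is simply the genetic share of the total variance across the simulated population.

```python
import random
import statistics

random.seed(0)
N = 100_000

# Invented variance components: genetic variance 0.6, environmental 0.4,
# so the "true" heritability of this simulated trait is 0.6.
genetic = [random.gauss(0, 0.6 ** 0.5) for _ in range(N)]
environment = [random.gauss(0, 0.4 ** 0.5) for _ in range(N)]
trait = [g + e for g, e in zip(genetic, environment)]

h2 = statistics.variance(genetic) / statistics.variance(trait)
print(f"share of trait variance associated with genes: {h2:.2f}")  # about 0.60
# A population-level statement about variability between people,
# not a claim that 60% of any one person's trait is "caused by genes".
```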

It is also worth noting that intelligence and religiosity show a modest negative association on average in meta-analytic work. This is not a claim about any individual believer—many brilliant people are religious. The more useful point is simply that cognitive style varies. Some people are more comfortable with analytical doubt and ambiguity; others are more drawn to certainty, authority, and communal reinforcement. 

A trait dimension that is relevant here is schizotypy: a spectrum involving unusual perceptions, magical ideas, and strong pattern-finding. Most religious people are not schizotypal. But a person who is more prone to unusual experiences and to seeing hidden connections may be more likely to interpret inner events as messages, revelations, or signs coming from outside the self. Schizotypy itself also appears to be substantially heritable, often in roughly the 30% to 50% range. And there is repeated evidence suggesting that higher levels of schizotypic traits—especially the unusual-experiences side—are associated with stronger paranormal beliefs.

Moral psychology belongs here too. Some people are more temperamentally drawn to moral themes like loyalty, authority, and purity; others prioritize harm reduction and fairness more strongly. These inclinations are also significantly heritable.  Religions, especially organized and more traditional ones, tend to be associated with stronger emphasis on loyalty, authority, and purity, with less emphasis on harm reduction and fairness.  That is one reason some people feel deeply at home in religious cultures while others experience them as alienating. 

Conclusion

Once a person really absorbs the logic and evidence for evolution, it becomes difficult to look at literal creation myths in the same way again. For many people, that shift does not drain the world of meaning. It opens the door to a deeper, steadier awe: reverence for reality as it actually is, not as we once wished it to be.