Wednesday, February 25, 2026

The Psychology of Religion, Chapter 15: Spirituality

Humans have cognitive tendencies that make superstitious beliefs easy to generate—and hard to extinguish. Beliefs in spirits, ghosts, magic, luck, or fate guided by mysterious forces are so widespread across cultures that they are difficult to avoid noticing. The surface content varies wildly from place to place—local spirits, protective rituals, sacred objects, invisible dangers—but the underlying psychological grammar is familiar.

A core ingredient is pattern-seeking. The mind craves meaning, and when the world is uncertain or painful it will often manufacture meaning rather than tolerate ambiguity. This is not stupidity; it is ordinary cognition under stress. When people feel a loss of control, they become more likely to perceive patterns—even illusory ones—in the environment, and to treat coincidence as signal. Superstition can be emotionally satisfying precisely because it converts randomness into a story.

Stories, dreams, unusual experiences, and compelling anecdotes can then become socially transmissible. Once a few people begin to interpret events through a “hidden forces” framework, the framework spreads: it gives language to fear and hope, it creates a sense of specialness, and it offers the pleasure of explanatory closure. Coincidences become “signs.” Ambiguous perceptions become “messages.” A confusing life becomes a legible plot.

From a psychiatric point of view, there is also genuine individual variation in proneness to unusual, mystical, or numinous experience. Some people reliably feel awe, presence, synchronicity, and “spiritual certainty,” while others rarely do. This is shaped by personality and temperament, by culture and reinforcement, and by biology. One useful but imperfect metaphor is that some minds run with higher “gain”: experience arrives vivid and compelling, but with a greater risk that noise is interpreted as signal. Salience systems in the brain—dopamine is one relevant piece of that puzzle—are part of how humans decide what feels meaningful, and research on paranormal belief repeatedly circles around the study of such neurotransmitter systems.  

Many members of organized religions disparage “superstition” or free-floating “spirituality.” Yet in psychological terms, the differences are often of degree rather than kind. Organized religions tend to formalize these human tendencies into institutions: they standardize the stories, professionalize the interpreters, and link belief to group identity and obligation. “Spirituality,” in contrast, often keeps the intuitions while loosening the institutional grip. But both draw on the same human appetite for meaning, comfort, and narrative.

The Psychology of Religion, Chapter 14: Religion as a Business

Many religions and other spiritual practices operate partly like a business. There is marketing (proselytization, outreach), branding (symbols worn on clothing or necklaces), encouragement to stay loyal to your brand, and criticism of competing brands. There is also a financial commitment, supported by an organized financial structure. Members of that structure have work to do, with an ultimate goal—explicit or implicit—of retaining and expanding membership, eliciting volunteer effort and financial contributions, and maintaining morale.

With some intensely tribal, high-commitment groups (fraternities are the obvious benign example, gangs the darker one), there can be an onerous initiation ritual. Social psychologists have shown that when people have to work hard, endure discomfort, or pay a steep price to join, they often become more loyal afterward—partly because the mind naturally tries to justify what it has sacrificed. Religions also commonly have initiation processes: potential members may be vetted, attend educational sessions, and then take part in some public ritual in which solemn commitments are made.

Sometimes, as with luxury business models, broad proselytization does not occur; instead, the “product” is restricted. Only a select few gain entry. In some traditions you need advanced membership—often taking years—before you are allowed to enter certain beautiful buildings such as temples, or partake in certain deep rituals. Sometimes only men are allowed into certain leadership roles or ritual spaces. These obstacles increase the allure and tend to attract people willing to contribute more commitment, time, and money. If everybody had a Rolex watch or a Gucci bag, it would cease to be as special; exclusivity is part of what makes the object feel “high-end.”

One particular feature of religion that resembles a corporate tactic is the elevation of belief alone—faith—as a key virtue. Belief without evidence is not merely tolerated; it is often praised. If a corporation could successfully propagate that idea, it would be extremely useful for marketing, since people would form loyalty to the brand without looking too closely at “reviews.” Doubt could be reframed as weakness, betrayal, or impurity. Meanwhile, “true believers” are rewarded: their status, trust, and esteem in the community rise in proportion to their loyalty.

In many cases religious institutions amass vast wealth: in property, buildings, and investments. In at least some prominent modern examples, credible reporting and public filings have described religious investment holdings on the order of tens of billions of dollars, with wider claims in some cases exceeding $100 billion—figures that are difficult to reconcile with the ordinary believer’s image of humble spiritual stewardship. And these structures often operate with significant tax advantages. In the United States, churches are generally treated as tax-exempt. In Canada, registered charities (including many religious organizations) are exempt from paying income tax while registered. 

And yet, some of the most insightful cautions about wealth come from within religion itself. One of the sharpest is the line attributed to Jesus (present in all three Synoptic Gospels): “it is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God.”

The Psychology of Religion, Chapter 13: To Be a Scholar, You Had to Study Theology

Over history, many wise people who wanted to use their intelligence and talents to learn about existential, philosophical, or scientific topics—while also guiding or helping their communities—ended up studying religion. In many eras, the church (or more broadly, religious institutions) offered one of the most stable pathways to literacy, scholarship, social leadership, and public voice.

This created an important structural bias: if you wanted to become a scholar, you often had to pass through theology. It wasn’t simply that “smart people liked theology.” Rather, theology was pushed into the intellectual mainstream partly because it controlled the educational pipeline. If you wanted books, training, mentorship, libraries, credentials, or a platform to teach, preach, or write, you frequently had to operate inside an institution whose organizing framework was theological. In that setting, it is not surprising that many great intellectual leaders, creative people, and moral voices either embraced religious thinking sincerely, or at least spoke its language fluently—because that was the price of entry into the scholarly world.

Of course, this does not mean that the underlying doctrines were necessarily correct. It means that there was a selection effect: theology and scholarship were tightly coupled, so theology gained prestige from its association with learning. When a brilliant, educated, altruistic person was also a theologian (or clergy), observers could easily slide into a mistaken inference: their intelligence validates their religious beliefs. But the more cautious interpretation is that intelligent, thoughtful people are compelling—and often good for society—even when they hold unfounded beliefs.

There is another aspect to this bias that matters just as much: in many historical settings, if you were a great scholar and you wanted to challenge the theological framework publicly, you didn’t simply risk social disagreement—you risked losing the very venue that made scholarship possible. Sometimes the penalty was professional exile; sometimes it was censorship; sometimes it escalated to legal punishment or execution. This is not a minor footnote. It means that the historical record is not a neutral marketplace of ideas: the “intellectual mainstream” had guardrails enforced by religious authority.

Galileo is one of the clearest examples. His work was not a vague “anti-religious attitude”; it was substantive science: telescopic astronomy and arguments for the Copernican model, expressed publicly and persuasively in works such as the Dialogue Concerning the Two Chief World Systems. He was tried in 1633 and spent the rest of his life under house arrest. Whatever one thinks of the surrounding politics, the lesson for my argument is straightforward: a scholar’s survival, platform, and legitimacy could depend on staying within theological limits.

Giordano Bruno illustrates a slightly different version of the same phenomenon. Bruno was a philosopher with cosmological ideas—sympathetic to Copernican thought and willing to push toward an image of an infinite universe and a plurality of worlds—entangled with theological claims that authorities judged heretical. He was executed in Rome in 1600. The point is not to turn him into a simplistic martyr for “science.” The point is that the intellectual venue was governed by religious boundaries, and crossing those boundaries could be fatal.

And this pressure was not limited to cosmology. Reformers, translators, and dissidents—people working on questions of authority, conscience, textual interpretation, and the right to think aloud—were also punished severely in various times and places. (Even when their “project” was not a new scientific discovery, the underlying conflict was similar: who gets to define truth, and what happens to you if you disagree.)

To be fair, religious institutions also preserved and transmitted learning in many eras; the story is complicated. But it is precisely because the story is complicated that the bias matters: when one institution is both a guardian of education and an enforcer of “correct thinking,” you will inevitably see a historical overrepresentation of scholars who were theologically fluent, and an underrepresentation of scholars who were openly theologically defiant. In some periods, the risk was not just social—it was life and death.

There is even a modern echo of this. Today, most academic science operates in secular institutions with strong norms of open debate. Yet in some settings—religious universities, seminaries, ideologically bound communities, or authoritarian political climates—career paths and social legitimacy can still be contingent on affirming a doctrinal framework. Even when this is not enforced by law, it can be enforced by social sanctions: loss of community, loss of employment, loss of status, loss of belonging. The mechanism is the same as in earlier centuries, but usually softer: a belief system becomes part of the admission ticket to a valued intellectual or social world. And once again, this can create the misleading appearance that “the best minds endorse the doctrine,” when part of what is happening is that dissenters self-select out—or are pushed out.

So the historical association between theology and scholarship has often been non-causal. It is partly an artifact of institutional history: for long stretches of time, if you wanted to be educated, you had to study religion; and if you wanted to remain educated publicly, you often had to respect religion’s boundaries. That fact alone can make the intellectual prestige of religion look stronger than the evidence for its literal claims actually warrants.

The Psychology of Religion, Chapter 12: Benefits of Unfounded Belief

Another interesting—but potentially troubling—angle on all of this is that people can sometimes latch onto a belief system whose claims are plainly unfounded, and yet experience real improvements that they had not found otherwise. Those improvements can entrench the belief further, because the person now has lived evidence: “It worked for me.” The mechanism is similar to the self-deception-mediated dynamic Trivers describes, and similar to the way psychoanalysis could help people even when much of its causal theory was exaggerated or wrong.

For example, some people latch onto an extremely rigorous or bizarre diet with a completely spurious rationale, and yet end up stabilizing a prior eating problem or obesity problem. Often what makes the diet “work” is not the theory, but the frame: the diet becomes a totalizing structure, a rule-set, a ritual, a commitment device. Strong belief in the diet’s narrative can increase adherence—sometimes dramatically—especially when the belief is reinforced by “spiritual” practices, authoritative texts, a charismatic leader, and enthusiastic support from fellow adherents. The resulting improvement may have little to do with the supposed mechanism (“toxins,” “energy,” “inflammation caused by impurity,” or whatever the myth is) and much to do with ordinary behavioral ingredients: attention to quantities and timing, reduced ultra-processed foods, fewer calories, more routine, more social accountability, and sometimes more exercise. In other words, the distinguishing features of the theory may be fictional, while the behavior change is real.

It is tempting to treat these forays into false belief as harmless whenever they produce visible gains. But there is a dark side. Some dietary regimens are medically dangerous; some aggravate eating disorders; and many cultivate a loyalty to the framework that discourages critical thinking. When a person’s identity becomes fused with a belief system, they may reject better treatments when those treatments are indicated—especially if a setback is interpreted as evidence of insufficient “faith,” insufficient purity, or insufficient devotion. In addition, these frameworks often come packaged with community. People who join one cluster of unusual health beliefs can be pulled, by social gravity, into neighboring clusters: new spiritual doctrines, political identities, conspiratorial worldviews, and the subtle expectation of financial contribution—paid coaching, proprietary supplements, retreats, memberships. The incentives are often misaligned.  

And there is another distortion worth naming: we mostly hear from the success stories. The people for whom the diet failed, harmed them, or simply became an expensive obsession rarely become public evangelists. The community’s narrative therefore becomes skewed toward “miracles,” while the quiet attrition and collateral damage remain invisible.

Finally, just as in religions, the next step is often proselytizing. People who believe they have found salvation—whether dietary, medical, or spiritual—tend to recruit. They may pressure friends and family to “convert,” and disparage outsiders as ignorant or closed-minded. In the context of fad diets and alternative medicine, that can do real harm to public health.

So the point is not that false belief never “helps.” The point is that when it helps, it often does so through common human mechanisms—structure, community, meaning, identity, accountability—while smuggling in risks that are easy to deny and hard to reverse once the belief becomes an emblem of belonging.

The Psychology of Religion, Chapter 11: Evolution

The following is a lengthy discussion of evolutionary science. I think it deserves space here because it deals with the origins and diversification of life in a way that directly contradicts literal religious or mythological accounts of creation. It is certainly possible to remain religious while accepting evolution—many people do—but for some believers, evolutionary biology feels like an unacceptable affront to faith, because it replaces a story of intentional design with a story of natural processes unfolding over vast time.

There are experts in evolutionary science who can explain this far better than I can. Still, I want to set aside space for it in my own voice, because (1) the basic logic is not hard to understand, (2) the evidence is overwhelming, and (3) the emotional resistance to it often has very little to do with evidence, and a lot to do with identity, belonging, and sacred narratives—topics I’ve already been discussing.

What follows is a tour through the core mechanism (natural selection), a few common misunderstandings, the idea of speciation, and then a few “side corridors” that matter for this essay: cultural (memetic) evolution, sexual selection, and the uncomfortable fact that even religiosity itself is partly shaped by temperament and biology, not only by culture.

Natural Selection

Natural selection is the central guiding principle of evolutionary theory. The logic is profoundly simple. It requires only that we accept three basic facts:

1. Organisms vary (physically, physiologically, behaviorally).

2. Some variation is heritable (traits are influenced by DNA, even though environment matters enormously too).

3. Some traits affect reproductive success—not in a morally loaded sense of “deserving,” but in the literal sense that some variants leave more surviving offspring than others.

If a heritable trait increases the probability of leaving more surviving offspring in a particular environment, then—over generations—the population will contain more of that trait. If a trait reduces reproductive success, it tends to diminish. That’s it: differential reproduction plus heritable variation, iterated over time.
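The loop described above—heritable variation, differential reproduction, iteration—is simple enough to sketch in a few lines of code. The following is a toy simulation, not a model from the literature: the trait, the fitness function, and all parameter values are invented for illustration. Individuals are single numbers, parents are sampled in proportion to fitness, and offspring inherit the parent’s trait plus a small undirected mutation.

```python
import random

random.seed(0)

# Toy model: each individual is one heritable trait value (think beak depth).
# Fitness is higher the closer the trait sits to an environmental optimum.
OPTIMUM = 10.0

def fitness(trait):
    # Expected reproductive success peaks at the optimum and falls off smoothly.
    return 1.0 / (1.0 + (trait - OPTIMUM) ** 2)

def next_generation(population, size=200, mutation_sd=0.3):
    # Differential reproduction: parents are drawn in proportion to fitness.
    weights = [fitness(t) for t in population]
    parents = random.choices(population, weights=weights, k=size)
    # Heritable variation: offspring resemble parents, plus blind mutation.
    return [p + random.gauss(0.0, mutation_sd) for p in parents]

# Start the population well away from the optimum.
population = [random.gauss(5.0, 1.0) for _ in range(200)]
for _ in range(60):
    population = next_generation(population)

mean_trait = sum(population) / len(population)
print(round(mean_trait, 1))  # the population mean drifts from ~5 toward the optimum
```

Note that no individual “tries” to reach the optimum and no mutation is directed; the sieve of differential reproduction alone moves the population.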

You can see the basic logic everywhere, from selective breeding in crops and animals, to antibiotic resistance in microbes, to the obvious family resemblance in both physical and psychological traits among human relatives. None of this requires the belief that genes are the only cause of traits; it requires only the admission that heredity is a major contributor.

A point worth emphasizing (because it is often misunderstood): natural selection is not about “improvement” in any moral or progressive sense. It is simply a filter that favors whatever works well enough in a local environment at a given time.

DNA replication is extraordinarily accurate, but not perfectly so. Across generations, there are small changes—mutations—introduced into genetic material. “Mutation” here does not mean “bad.” It means “change.” Most mutations are neutral; some are harmful; a few are beneficial in a given environment. In addition to mutation, sexual reproduction creates variation through recombination—shuffling existing variants into new combinations.

A crucial clarification: the useful statement in evolutionary theory is not that mutations are “random” in some metaphysical sense, but that they are not produced because they would be useful. In other words, variation arises without foresight; selection is the non-random sieve that preserves what works.

When small genetic changes happen to produce a trait that improves survival or reproduction in a particular environment—say, a slightly different beak shape that accesses food more efficiently—those variants can become more common. Over many generations, accumulation of such changes can produce substantial transformations, including changes in complex organs and behaviors.

Darwin’s finches in the Galápagos remain a famous entry point into this idea: different ecological niches favor different beak shapes. The underlying point generalizes: ecological pressures shape populations.

One reason evolution feels counterintuitive is that large organisms reproduce slowly relative to a human life. Big evolutionary changes can take thousands or millions of years, just as major geological or astronomical processes do. We don’t “watch” a canyon form or a star evolve within a single afternoon, but the evidence for those processes is still decisive. Evolution is similar: you infer long processes from converging lines of evidence.

Fossils are one line of evidence—imperfect, but immensely powerful. Fossilization is rare and biased toward certain environments and tissues, so the record will always be incomplete. Still, the overall pattern—order in time, transitions, branching diversification—fits evolutionary predictions. Many intermediate forms for major transitions are known, and “gaps” often shrink as new discoveries accumulate.

And importantly: many “intermediate” forms are living, not fossil—species that preserve traits that help us understand evolutionary branching, even though they are not literally our ancestors.

A cliché objection goes like this: “Evolution says humans descended from chimpanzees.” That is not what evolutionary biology claims. The evolutionary picture is a branching family tree. Humans and chimpanzees are not ancestor and descendant; they are evolutionary cousins who share a common ancestor in the deep past.

The same logic generalizes outward. All living creatures on Earth are related in a vast genealogical sense: a branching history of common ancestry over deep time. For many people, this is not depressing at all—it is a source of awe, a kind of cosmic kinship.

One of the most beautiful things about modern evolutionary science is that it does not rely on any single line of evidence. Comparative anatomy, fossils, biogeography, embryology, and genetics all converge on the same branching structure.

At the genetic level, you can compare DNA or protein sequences across species. The patterns of similarity and difference allow reconstruction of phylogenetic trees that match what we see in fossils and anatomy. You can also use calibrated rates of genetic change (with many caveats and error bars) to estimate when lineages diverged. The key point for this essay is not the exact date of every branching event—it’s the scale: the story is unimaginably older than the few-thousand-year timeframes implied by literalist readings of sacred texts.
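The core move in sequence comparison can be shown with a deliberately tiny example. The sequences and species labels below are invented toy data; real phylogenetics works with long alignments, explicit substitution models, and calibrated rates, but the principle is the same: count differences, and the smallest distance suggests the most recent common ancestry.

```python
from itertools import combinations

# Invented 12-base "sequences" for three species (toy data, not real genomes).
sequences = {
    "human": "ATGCTGACCTGA",
    "chimp": "ATGCTGACCTGT",  # one difference from "human"
    "mouse": "ATGATGTCCAGA",  # several differences from both
}

def hamming(a, b):
    # Number of positions at which two equal-length sequences differ.
    return sum(x != y for x, y in zip(a, b))

distances = {
    (s1, s2): hamming(sequences[s1], sequences[s2])
    for s1, s2 in combinations(sequences, 2)
}

closest_pair = min(distances, key=distances.get)
print(closest_pair)  # ('human', 'chimp'): the smallest distance marks sister taxa
```

From the pairwise distances you can read off the branching order: the two most similar sequences share the most recent split, and the outlier branched off earlier—exactly the tree structure the chapter describes.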

Over the short term, evolution often looks like shifting trait frequencies within a species. Over the long term, divergence can accumulate until populations become reproductively isolated—meaning they can no longer interbreed successfully under natural conditions. That is speciation.

Speciation isn’t always a sharp on/off switch. In nature it often behaves like a continuum, with partial compatibility, hybrid zones, and gradual divergence. A famous teaching example is the Ensatina salamander complex, often discussed as a “ring species”: neighboring populations can interbreed around a geographic region, while populations at the far ends of the chain have diverged enough that interbreeding breaks down. This is one of those cases that makes the underlying idea vivid: species boundaries can be the end-point of gradual divergence, not a magical dividing line.

One thing I want to state explicitly (because people often get confused here): even if a behavior or trait has evolved, that does not mean it is morally “right,” or that we should accept it. Evolution describes how traits spread; it does not tell us what we ought to value.

Evolution also does not produce “perfect design.” It modifies what already exists. That is why we see compromises and oddities in anatomy—structures that work well enough, but are constrained by history. Humans are full of such examples. The moral point for me is simple: if we want to become more humane, we have to build culture—rules, norms, education, and institutions—that restrain some evolved tendencies and cultivate better ones.

A parallel kind of evolutionary logic shows up in culture. Dawkins coined the term “meme” to describe cultural units that replicate—ideas, phrases, rituals, styles—that spread, vary, and undergo selection in minds and communities.

Language evolution is a good example. Languages branch and drift. Over time, groups can become mutually unintelligible. You can even reconstruct “family trees” of languages and infer common ancestors like Proto‑Indo‑European, spoken roughly 6,000 years ago, which eventually diverged into tongues as distinct as Hindi and Norwegian. Even among close relatives, the drift is measurable: English and German began to diverge significantly around the 5th century CE with the Anglo-Saxon migration to Britain, while Dutch and High German separated later, roughly between the 6th and 8th centuries, driven by the High German consonant shift. These are not static categories but snapshots of a flowing river; given enough time and separation, what was once a dialect becomes a distinct language (analogous to a biological species).

Ironically, religions also behave this way. Doctrines split. Schisms occur. New denominations form. We see this clearly in Christianity: the Great Schism of 1054 formally severed the Eastern Orthodox and Roman Catholic churches; later, the Reformation of 1517 splintered Western Christianity into Catholic and Protestant branches. That Protestant branch fractured further into Lutherans, Calvinists, and Anabaptists, and continued splitting into modern groups like Methodists and Pentecostals. This fragmentation continues today with astonishing speed. In just the last few years, the United Methodist Church has undergone a massive rupture over social doctrine, while evangelical congregations frequently splinter over narrower debates—ranging from the role of women in leadership to specific interpretations of theology—creating new networks almost overnight. At times, this drift is total. Newly formed divergent groups can become so distinct as to have no further contact or agreement with each other, a process analogous to the formation of distinct species in the biological world. This pattern is universal: Islam experienced its primary rift between Sunni and Shia almost immediately after the death of the Prophet Muhammad in 632 CE, and Buddhism diverged into Theravada and Mahayana schools roughly around the 1st century BCE. The “family tree” metaphor is not perfect, but it is illuminating: religions do not descend from heaven fully formed; they evolve within history.

And this matters psychologically: people sometimes treat their local, historically contingent version of a faith as if it were timeless and universal—when in reality it bears the fingerprints of cultural inheritance, conflict, politics, and geography.

Another important evolutionary idea is sexual selection: traits can spread not because they help survival directly, but because they affect mating success. The peacock’s tail is a classic case—beautiful, costly, and dangerous, yet selected for because it becomes desirable within the mating preferences of the species.

There is a long debate about what sexual ornaments “signal.” Some theories emphasize indirect signals of health and robustness. Richard Prum argues, persuasively in my view, that sexual selection can also reflect an evolving aesthetic culture—preference itself becoming a driver of biological change.

This is relevant to humans not because we are peacocks, but because it reminds us that biology is not only grim survival calculus. It includes courtship, display, aesthetic preference, and the strange feedback loops between bodies, brains, and culture.

Religiosity itself—along with the tendency toward mystical or paranormal belief—is not only cultural. Twin studies suggest that as people reach adulthood, genetic differences contribute meaningfully to individual differences in religious values and practices, while shared family environment tends to matter less than it does in childhood. Genetics accounts for only a portion of the variability, alongside development, culture, peer groups, and life events.

It’s also worth noting that intelligence and religiosity show a small-to-moderate negative association on average in some large literatures, particularly for more literalist or fundamentalist belief styles. This is not a claim about any individual believer (many brilliant people are religious), but about population-level tendencies and the cognitive styles that different religious environments reward or discourage.

A related trait dimension that matters here is schizotypy: a spectrum of unusual perceptual experiences, magical ideation, and pattern sensitivity. At mild levels it can correlate with creativity and a “poetic” mode of experience; at extremes it can shade into pathology. A mind more prone to vivid pattern-finding and unusual salience can be more vulnerable to interpreting internal experiences as external revelations.

And moral psychology matters too. Some people are more temperamentally drawn to moral foundations like loyalty, authority, and purity—dimensions that many religious communities strongly emphasize. Others prioritize harm reduction and fairness. These differences shape who feels “at home” in particular religious cultures, and who experiences them as suffocating.

I’m aware that this chapter is long. But I think the payoff is worth it: once you really absorb the logic and the evidence for evolution, it becomes hard to view literal creation myths the same way again. And yet—this is crucial—it does not require cynicism. For many people, it opens a door to a deeper, cleaner awe: a reverence for reality as it actually is, not as we once wished it to be.