Showing posts with label Religion. Show all posts

Saturday, February 28, 2026

The Psychology of Religion, Chapter 29: Conclusion

In conclusion, religious beliefs—and organized group religion in particular—have been a part of human civilization for thousands of years. Culturally, religion can have many benefits: it can help communities come together to celebrate and to grieve, to contemplate morality, to show gratitude, and to meditate. Religious faith is consolidated by human tendencies to be loyal (to one’s ingroup, to one’s family, to longstanding beliefs learned and practiced since childhood, and to idealized figures), and by the human tendency to internalize (for many believers, God functioning as an internalized representation of perfect goodness, power, or protection). Religions are further consolidated by many enjoyable and meaningful human cultural activities: a lot of the world’s greatest art, music, literature, and architecture is rooted in religion. Religions also help many people cope with the deepest, most painful, and most frightening experiences of life, such as facing the deaths of our loved ones, or facing one’s own mortality. And religious services can be a medium through which people meet friends or potential partners, sometimes with a better-than-average chance of meeting someone with whom they might share values.

Yet religions and other spiritual or mystical systems hold beliefs that are not true. These beliefs are often taken literally, and dogmatic adherence to them—public profession of them, loyalty to them—is frequently required as a sign of belonging. Some of these fictions may be inconsequential much of the time; many people can live decent lives without a precise understanding of biology, astronomy, geology, genetics, or ancient history. But the darker side has to do with the extremity of group loyalty: ingroups and outgroups form, and religion becomes an emblem of identity that can seed mistrust, exclusion, and maltreatment of outsiders. Dogmatic pronouncements can also become oppressive to the group’s own members, particularly when people are pressured into literalistic interpretations of sacred texts, or when “faith” becomes a moral duty rather than an honest way of grappling with uncertainty.

The lack of accurate education about the way the world works is ultimately detrimental to any individual, group, or nation. It is like a pilot who doesn’t understand how the engines work and assumes that planes fly by magic. Most of the time this may not seem to make much difference to the safety and navigation of the plane—until the weather changes, until something unexpected happens, until you need a sober understanding of what is real in order to respond well. A culture can coast for a long time on comforting stories. It is when conditions become difficult that false models show their true cost.

I think it is valuable that we live lives in which we strive toward understanding deep truths—about ourselves and about the world—and it is simply not satisfying to settle for fictional beliefs, even if those fictions might comfort us. It particularly troubles me when children are indoctrinated with dogmatic beliefs, especially if they are not also given accurate information about the world in terms of science, history, and culture. And it is troubling that there should be public financial support for religious groups, in the form of tax breaks and other privileges, unless these are clearly restricted to the charitable components of religious outreach rather than the promotion of dogma or political influence.

We certainly know that holding religious belief is not necessary to be a moral, kind, loving, gentle, humble person. In fact, in some cases religious beliefs can obstruct these positive qualities and add to the world’s problems. And it is possible to face the most difficult aspects of human life—grief, loss, pain, and death—while behaving honourably, peacefully, and nobly, without requiring belief in some eternal reward. In fact, moral behaviour done for its intrinsic good, rather than being motivated by fear of punishment or hunger for reward, seems to me a deeper ethical foundation. Such a stance does not require religion, but it does require effort: working on living well, striving to become a better person, and trying to be a stabilizing and humane influence on others.

In discussing religion, it is important to empathize with people who hold religious or spiritual beliefs. Respectful understanding of how and why people believe as they do matters—especially if the goal is genuine dialogue rather than tribal combat. It is also valuable to search for common ground, particularly with regard to values. Most religious people value integrity, loyalty, altruism, compassion, truthfulness, lawful behaviour, fairness, family, care of children, hard work, and the willingness to stand up for what is right even at risk to oneself. In a discussion about religious belief, it can help to emphasize these shared values, because it appeals to unity rather than escalating the feeling that one is an outgroup member disrespecting a sacred tradition.


And that brings me back to the question that started many of these reflections: how to face transience without leaning on supernatural reassurance. I sometimes think of simple things: a firework, a meal, a fire, a cup of hot tea. These things are transient; they disappear. Yet their constituents are still present—they have merely dissolved and dispersed into the surrounding space in a different form. The structure ends; the ingredients remain, rearranged. We can’t expect the cup of tea to survive unchanged forever, and we can’t expect the firework to glitter permanently. In fact, it is normal—and even required for its enjoyment—that it be transient. A lot of religion tries to deny this, or to soften it with a story about eternity. I think there is another path: to accept that things end, to grieve honestly when they end, and still to love them fiercely while they are here.

There are examples of keeping the healthiest aspects of religion—the focus on values, morality, kindness, altruism, charity, humility, meditative self-care, self-improvement and sincere amends-making, caring for and accepting care from community members, enjoying beautiful music, art, and architecture, with a focus on gratitude and reverence—while not becoming captive to narrow dogma, false beliefs about science, or denigration of outsiders. Some interfaith movements aim to cultivate peace and mutual respect across traditions. Some branches of modern religion are simply less dogmatic and more open to science and cultural pluralism. And many people, religious or not, find ways to live with moral seriousness and spiritual depth without insisting that myths must be treated as literal facts.


The Psychology of Religion, Chapter 28: Religion and Common Knowledge

One of the most useful ways to understand religion is to treat it as a way humans achieve social coordination. Steven Pinker’s recent work on common knowledge offers a sharp insight about this. “Common knowledge” is not merely that many people know something; it is the further (and crucial) fact that everyone knows that everyone knows it, and everyone knows that everyone knows that everyone knows it—an infinite regress that human beings handle with surprising ease in daily life.

This strange cognitive capacity is not just a philosophical speculation. It is a practical tool that makes civilization possible. In Pinker’s framing, common knowledge generates coordination: it lets people converge on shared conventions (driving on the right, accepting paper currency, showing up to the same meeting place at the same time) without needing a central enforcer to micromanage every choice. But the same logic also explains the “shadow side” of social life: why people avoid saying obvious things out loud, why hypocrisy can be stabilizing, why shaming mobs ignite, why revolutions seem to erupt “out of nowhere,” and why public rituals—of all kinds—have such force.
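The regress itself can be made concrete with a toy sketch (my own illustration, not taken from Pinker’s work): each added level wraps the previous statement in one more layer of “everyone knows that…”, and common knowledge is the limit of all such levels holding at once.

```python
# Toy illustration of the common-knowledge regress: each level wraps the
# previous statement in one more layer of "everyone knows that ...".
def knowledge_statement(level: int, fact: str = "the banknote has value") -> str:
    """Return the English statement for a given depth of mutual knowledge."""
    statement = fact
    for _ in range(level):
        statement = f"everyone knows that {statement}"
    return statement

# Common knowledge = all of these levels holding at once, out to infinity.
for k in range(4):
    print(f"level {k}: {knowledge_statement(k)}")
```

No finite depth of “everyone knows that…” is quite the same as common knowledge; a public event is powerful precisely because it generates every level at once, since each witness sees all the other witnesses seeing it.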

Religion is, among other things, a machine for manufacturing common knowledge. Private belief is psychologically real, but socially weak. It does not coordinate strangers. A society cannot run on invisible beliefs that no one can observe. What rituals do—prayer spoken aloud, communal singing, congregational responses, public confessions, initiation rites, sacred calendars, distinctive clothing, a crucifix necklace, shared dietary rules—is turn inner states into public signals. They convert “I believe” into “we all see that we believe,” and then into “we all know that we all see that we believe.”

This matters because people are extremely sensitive to the social risk of being the odd one out. If belonging is the reward and ostracism the punishment, the most dangerous condition is uncertainty: Do they believe this? Do they know I believe it? Do they know I’m wavering? Rituals collapse that uncertainty. They make allegiance visible. They create an emotionally saturated version of a contract—less like signing a document, more like standing under a spotlight and letting the group watch you sign with your whole body.

This is why religions place such emphasis on public acts. Private prayer is meaningful to many, but communal prayer is socially decisive. Singing alone is esthetic; singing together is social glue. An individual moral intuition is fragile; a moral intuition recited in unison becomes harder to question, because questioning it is no longer a solitary cognitive act—it is a social offense. And once a belief is entangled with common knowledge, its truthfulness often becomes secondary to its coordination-value. The belief may be fictional, but it is socially efficient.

Common knowledge is not only about shared content; it’s also about credible commitment. A cheap signal is easy to fake. A costly signal is harder to fake, which is why social groups are so attracted to cost. High-demand religions—those that require many hours of weekly participation, tithing, conspicuous behavioural restrictions, sexual policing, or public displays of devotion—often look irrational or excessive from the outside. But through the common-knowledge lens, some of this “irrationality” is exactly the point. If a group can get you to do something that is inconvenient, stigmatizing, or effortful, it has a way to distinguish true loyalists from casual tourists. The sacrifice itself becomes evidence.

This is also why initiation rituals recur across wildly different human groups, religious and otherwise. The ordeal is a signalling device: “I paid a price to be here; therefore I must value being here; therefore I am one of you.” The group sees the price, and the price creates common knowledge of commitment.

Pinker makes an additional point that is deeply relevant to religious life: people do not always want knowledge to become common knowledge. They often go to great lengths to ensure that even if everyone privately knows something, no one is forced to publicly acknowledge it. Many communities function because people collude, tacitly, in not pressing certain questions to the point of explicitness. The moment a doubt is spoken plainly, it stops being a private flicker and becomes a social event. It demands response. It forces alignment. It threatens the shared story.  

I can’t help but think that many members of large partisan groups today harbour private doubts about very alarming world events, yet few people within these communities are willing to speak those doubts out loud, since doing so would risk losing the support of their community. It reminds me of the fairy tale The Emperor’s New Clothes. The story is especially apt to our times: many truths of the current world situation are so obvious that a very young child could understand them clearly, and perhaps the innocence and humility of a child’s voice is exactly what is needed to convince those currently unwilling to speak the truth. One definition of heroism, in my opinion, is the willingness to speak one’s private knowledge of the truth to a group that might, at least initially, reject you for it.

In practice, a religious community often survives not by answering every question, but by managing which questions are acceptable to ask.

Religious spectacles—miracles, exorcisms, dramatic conversions, speaking in tongues, revival meetings—are not just theological events. They are high-powered signalling events. They take a private feeling (“I felt something”) and turn it into a public fact (“We all saw her fall, shake, cry, speak strangely, rise transformed”). The group witnesses a performance that is emotionally contagious, and the witnessing itself becomes part of the evidence.

The crucial move is not that an unusual event occurs, but that everyone sees everyone seeing it. This is how common knowledge is made at high speed: a shared spectacle that forces a shared interpretation, or at least a shared posture. If you stand in the room and do not respond, you are not merely unconvinced—you are socially deviant. The power of the event is partly the power of mutual surveillance.

This is also why sceptical outsiders often have a dual reaction to certain public charismatic performances: amusement at the apparent absurdity, mixed with unease at the real influence such spectacles can have when they become fused to political power. The performance may look ridiculous, but its social function is very serious: it converts theatrical intensity into tribal certainty.

A frequent defence of religion is that it provides moral structure. That claim is not wholly wrong—at least at the level of group coordination. A community that repeats moral language weekly, that teaches children shared scripts for gratitude, restraint, charity, and self-scrutiny, will often produce decently socialized people. The group is continuously manufacturing common knowledge about what counts as admirable, shameful, or forbidden.

But common knowledge cuts both ways. It can coordinate kindness; it can also coordinate cruelty. When a group makes contempt for outsiders common knowledge—through sermons, jokes, or political messaging—the moral atmosphere shifts. People become emboldened. What was privately felt becomes publicly permitted. The difference between a prejudice that quietly lingers in someone’s mind and a prejudice that is openly shared is enormous: the second is actionable. It becomes policy. It becomes bullying. It becomes violence with a clean conscience.

We see today a rise in bullying and prejudice, in part because of this common-knowledge effect: various groups are coordinating a social norm in which prejudicial thinking is openly shared by a community and becomes an emblem of partisan group involvement.

The frightening historical efficiency of religious persecution is, in part, a story about common knowledge: it is easier to harm others when the group has made the justification publicly shared, ritually repeated, and socially rewarded.

Common knowledge is not only created in sanctuaries; it spreads through networks. Here the work of Nicholas Christakis is a useful complement. His research argues that behaviours can “cascade” through social networks—spreading from person to person to person, sometimes out to several degrees of separation. Human behaviour is not merely individual choice; it is often contagious.

Religion has always understood this intuitively. Congregations are network structures: friendship graphs with rituals attached. Conversion is rarely solitary; it is more often a relational event. People move toward belief because a trusted person pulls them toward a group in which belief is common knowledge. Doubt spreads similarly: not primarily through reading an argument, but through watching someone you respect begin to treat the sacred story as optional. The moment that becomes visible, it becomes socially thinkable. It becomes “sayable.” It becomes a potential cascade.
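The cascade idea can be sketched with a simple threshold model on a friendship graph (my own toy construction, not Christakis’s actual methodology): a person adopts a behaviour once a large enough fraction of their friends has, and adoption then spreads until it stabilizes.

```python
# Toy threshold-cascade model: a person adopts once at least `threshold`
# of their friends have adopted; adoption then spreads until it stabilizes.
def run_cascade(neighbours, seeds, threshold=0.5):
    """Spread adoption through a friendship graph until no one else tips."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for person, friends in neighbours.items():
            if person in adopted or not friends:
                continue
            fraction = sum(friend in adopted for friend in friends) / len(friends)
            if fraction >= threshold:
                adopted.add(person)
                changed = True
    return adopted

# A small friendship graph: a single well-placed early adopter tips everyone.
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c", "e"],
    "e": ["d"],
}
print(sorted(run_cascade(graph, seeds={"a"})))  # ['a', 'b', 'c', 'd', 'e']
```

The same graph with a higher threshold, or with the seed placed at the periphery (for example, starting from "e"), stalls after a step or two. That mirrors the point above: whether belief or doubt spreads depends on the structure of the network and the placement of the first visible dissenter, not only on the content of the idea.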

This is one reason religious authorities, across centuries, have been so preoccupied with public dissent. Private doubt is manageable; public doubt threatens contagion.

The “New Atheist” era often tried to treat religion as if it were primarily a set of factual claims—claims that could be refuted, one by one, by geology, evolutionary biology, textual criticism, or cosmology. Those refutations matter. But they often fail to persuade for the same reason a spreadsheet rarely defeats a love affair: the object is not merely an idea; it is a social world.

If religion is partly a technology for manufacturing common knowledge—about belonging, virtue, status, and identity—then a purely evidential critique will bounce off the surface for many people. The deeper structure is social. To leave a religion is not only to change one’s beliefs; it is to risk becoming unintelligible to one’s own tribe. In the harshest cases, it is to risk exile. The mind treats that as a danger.  

This also helps explain why political leaders so often perform religiosity even when their lives show little evidence of it. Performance creates common knowledge. A staged photo with a sacred symbol is not primarily addressed to God; it is addressed to the crowd. It signals, “I am one of us,” and it invites the crowd to become complicit in acting as if that were obviously true. Once the performance becomes common knowledge, dissenters inside the coalition pay a social price for pointing out the obvious.

Therefore, the secular task is not only to critique supernatural claims. It is to build non-supernatural forms of common knowledge that can do some of the same social work. Something like this is already happening in the modern world. People gather around causes, institutions, professions, civic rituals, scientific identities, mutual aid networks, even exercise cultures. These can be silly or beautiful; freeing or authoritarian. The point is not that secular life lacks ritual. It is that secular rituals are often fragmented, unstable, less grounded in deep roots reaching back through one’s family tree and ethnic culture, and less explicitly oriented toward moral formation. The art and esthetics of the secular world are also far less well developed than those of the religious world.

Religion persists not only because people are credulous or fearful, but because religion solves hard social problems. Pinker’s concept of common knowledge helps explain how it solves them—sometimes in ways that elevate human life, sometimes in ways that deform it. And once one sees religion as a social technology of visibility—of signals, rituals, and shared scripts—one can critique it more honestly: not as a childish mistake, but as an ingenious human invention that exacts a price.

The deeper question is whether we can build a life, and a society, in which the best human goods that religion has traditionally coordinated—community, moral aspiration, awe, mutual care—can become common knowledge without requiring that we pretend, together, that comforting fictions are facts.

The Psychology of Religion, Chapter 27: Consciousness

There are many unanswered questions about how the universe works. Part of the wonder of science is appreciating that for every advance in understanding, there are always new horizons of the unknown to explore further.

I find that one existential frontier in understanding has to do with consciousness. Regardless of the various physical explanations of why we have conscious, subjective experience (of memory, drives, sensations, emotions, etc.), it remains truly miraculous that this occurs. It is true that consciousness exists on a continuum; it has definitely been sculpted by evolutionary forces, and is subject to a lot of variation, with diminished or gradually altered consciousness caused by sleep, fatigue, anesthesia, substances, neurological disease, etc. It is interesting to consider whether consciousness could be a property of nature itself, as opposed to a property only of a neurological system such as the brain. Some great scientists such as Roger Penrose have theorized about the mechanisms of consciousness; while I think such theorizing is interesting and worth following, I'm not sure that the result would change my opinion of this matter too much. Even if there were a precise physical explanation, it would not lessen the miraculousness of it.

I find consciousness even more miraculous than "free will," since even if the universe were entirely deterministic or superdeterministic, there would still be human consciousness, which is something that deserves a feeling of wonder and awe. Some people would say that the phenomenon of consciousness is a manifestation of the divine—and I guess I'd have to be okay with that, perhaps even as a foundational definition of the word "divine."

The Psychology of Religion, Chapter 26: Religiosity, Narcissism, and Obsessiveness

The combination of religiosity with narcissistic traits is not rare, and it can lead people to insinuate—or directly assert—that their beliefs, their culture, and their moral grounding are simply better than those of people outside their faith. Boastfulness, arrogance, and self‑righteousness then function as a way of belittling others. Traits like this are sometimes rewarded inside a shared belief system, especially when confidence is mistaken for virtue. Once again, this violates religion at its best, which (in many traditions) emphasizes humility, kindness, and respect for outsiders.

Sanctimony is a related phenomenon: moral language used not primarily to understand right and wrong, but to signal superiority, to enforce conformity, or to punish dissent. In its mildest form it is simply performative piety; in its harsher forms it becomes a social weapon—one that can make ordinary people feel small or wrong.

Obsessiveness, as a personality style, refers to rigidity and narrowness, with intolerance of shades of gray, and a tendency to judge others harshly for deviations—large or small. (This is closer to obsessive‑compulsive personality traits than to OCD; it’s about rules and control more than about unwanted intrusive thoughts.) When this "Pharisaical" style fuses with religion, it can create families and communities where people live in a chronic state of being watched, measured, and morally scrutinized. The atmosphere becomes tense, cautious, and punitive—more about avoiding wrongness than cultivating goodness. Again, this runs against religion at its best, which repeatedly elevates higher values—love, mercy, generosity, humility—above rule‑keeping for its own sake.

To be clear, these traits are not “religious” traits; they are human traits. But when combined with religion they can often grow, or masquerade as piety.

The Psychology of Religion, Chapter 25: Speaking in Tongues

Some religions feature unusual behaviours that are accepted as manifestations of divinity. One example is glossolalia (“speaking in tongues”). Every cultural group has rituals that symbolize transcendence or divine intervention somehow, but it is concerning in modern times that people would treat this as a literal case of God “speaking through” someone, rather than as a human psychological and social phenomenon.

So what do we actually know about glossolalia? It usually isn’t the dramatic idea some imagine—suddenly speaking a real foreign language you never learned. Instead, it’s speech-like vocalizing: it has rhythm, emotion, and a kind of “word-like” flow, but it doesn’t reliably carry stable meaning or grammar the way a normal language does. When linguists study recordings, they tend to find that it draws heavily on the sounds and speech habits the person already has in their ordinary language—almost like a voice improvisation that feels like language, without functioning as one in the usual sense. When glossolalia happens in a context where it is expected, taught, and socially supported, it looks like a learned trance or skill—comparable to hypnosis, flow, or dissociation.

One can find examples online—there are widely circulated clips of a high-profile “faith leader,” close to a major political figure, performing “tongues” in public. I think a lot of people seeing this for the first time have a mixed reaction: perhaps, with a nervous smile, the thought "how can anyone take this seriously?" followed by some discomfort, and then a sharper concern once it lands that the performer has a large following of fervent supporters, and has mainstream political influence. It is deeply ironic that a communicative tool which does not carry any semantic meaning can be so persuasive to otherwise logical observers.

From a psychiatric point of view, glossolalia can be understood as a particular kind of altered attention state that can be learned, practiced, and performed. Put someone into the right mix of conditions—music, group emotion, high expectation, authority cues, shared language about the sacred—and a person can produce vocalizations that feel deeply meaningful. The speaker may experience it as surrendering control; the group experiences it as proof that something “beyond” is present.

This is where the social function matters most. Like “miracles,” and like behavioural restrictions that visibly mark membership, glossolalia can work as a signal: it makes the group feel special, chosen, and close to the divine in a way outsiders “don’t get.” That feeling is intensely bonding. It strengthens loyalty, rewards conformity, and makes doubt feel not merely intellectual but socially dangerous—almost like betrayal. The experience itself becomes the evidence, and the shared intensity becomes the glue.

Of course, the same machinery can be used for darker purposes. A leader who is skilled at spectacle and emotional orchestration can use these displays as persuasion technology: not by offering reasons, but by creating awe, certainty, and a sense of “we are witnessing the sacred.” The danger is not the oddness of the behaviour; it’s the way the resulting belief and allegiance can be redirected into real-world authority—sometimes including political authority, or as a tool to obtain financial donations—under a banner of divine mandate.

Friday, February 27, 2026

The Psychology of Religion, Chapter 24: Behavioural Restrictions

In some cases, religious groups prescribe particular foods, particular styles of dress, and particular behavioural expectations that are only loosely related to a moral issue—if they are related at all. Sometimes these practices can be understood as ordinary cultural variations with obscure origins. But often there is a sense that the rules are rigid and imperative, such that veering away from them is treated as an offence—either against the religious community or family, or against God. At times these restrictions make it difficult to live freely or comfortably in modern society.

One major function of these rules, in practice, is their signalling value: they remind others (and even oneself) of group affiliation and loyalty. This is comparable to other mechanisms groups use to bolster cohesion. When there are visible styles of appearance and behaviour that clearly mark membership, it becomes easier to find fellow members—and easier to be suspicious of outsiders. Over time, people can become fond of these behavioural symbols. They can evoke powerful feelings associated with the religion, and can function like wearing a ring with special significance every day and night for years, beginning in childhood. People may then feel uneasy or even guilty without it, and feel relief when they encounter others wearing the same symbol.

But if the “ring,” so to speak, becomes massive and cumbersome—if it begins to hinder ordinary life—then what once felt meaningful can become a kind of burden. (It starts to resemble the peacock’s tail: a costly display that signals loyalty, but at a real practical price.)

We see similar dynamics in modern culture in many settings—uniforms, subcultures, and corporate branding. Often these are harmless variations. The darker side appears when people do not wish to participate, when the rules become tools of control, or when symbols are used to suppress ordinary human behaviour—and when the person faces rejection or punishment from peers for noncompliance.

A related dark side of religious dogma is doctrine-based condemnation or discrimination against people whose lifestyles are not endorsed by the group. Often, at root, this is an ordinary human tendency—present in many non-religious settings as well—to exclude or denigrate people who are different, even when they are not harming anyone. But the best of religious texts call people to rise above this: to be inclusive, non-judgmental, and unfailingly loving toward everyone, not only toward those who share the same beliefs or lifestyle. There are various Biblical stories, for example, of reaching out in a loving, accepting way to members of groups that were widely vilified in their own time.

The Psychology of Religion, Chapter 23: Eschatology

Many religions have a view of the “end times”—what happens after death, and, in some traditions, how history itself will end. This is called eschatology. In some communities there is an almost excited anticipation of the world’s ending, paired with the idea of a glorious ascent of the worthy up to heaven (there’s that spatial metaphor again, taken quite literally by many, as though heaven must be “upwards”). Of course, those with this view usually assume they will be among the worthy. In turn, some people cultivate a kind of passive resignation about trying to improve the world’s problems: they say these are the “end times,” so why bother. And to some degree this kind of thinking can shape how people relate to society and politics—sometimes pulling them away from the work of changing the world.

I realize, of course, that eschatology doesn’t always produce passivity; in some forms it can motivate people toward reform or activism. But when apocalyptic belief becomes an excuse for disengagement—or an indulgence in catastrophe—it becomes a bleak and cynical example of what happens when dogma is taken literally. At its darkest, it can spill into extreme behaviour, such as the Heaven’s Gate mass suicide in 1997. Even if the world were ending, it seems profoundly dishonourable to adopt passive resignation—let alone a smile of anticipation—about helpful action. It would be like watching a burning building with no attempt to help the people trapped inside, quietly nodding to yourself that heaven is getting closer.

I think most of us would agree that the most noble and beautiful actions humans are capable of are helpful and altruistic: working to improve a situation even when it is bleak or seemingly hopeless. A truly noble person would not be motivated by thoughts of a glorious heavenly reward upon death; they would be motivated to do good because of the intrinsic goodness of the action itself.

The Psychology of Religion, Chapter 22: Heaven and Hell

Many religions have concepts of Heaven and Hell: Heaven an eternal state of perfect happiness, and Hell an eternal state of punishment. Religious doctrines often advise that people live appropriately during their lifetime on earth, and after they die they will be judged and sent to one place or the other. In some doctrines, the criteria are not even that you live a good life (for example, to be kind, to not hurt others, to contribute to society, to make the world a better place, etc.) but rather whether you profess belief in a very particular way. Thus, one could be the kindest, most helpful person in human history, but still go to hell if the appropriate beliefs are not endorsed. Or one could commit the worst atrocities in history, and just be an all‑round hurtful person, yet go to heaven afterwards if the appropriate beliefs are endorsed.

This concept functions as a powerful engine of group affiliation using a combination of threat and reward. It is like a company offering permanent safety and support if you sign a lifetime membership, agree to promote the brand, and guarantee not to deal with competing companies. But the same company would also threaten to ruin you permanently if you broke the deal. There would be frightening rules in the contract, such that the act of challenging company policy would be branded with words like “heresy” or “apostasy,” discouraging anyone from questioning the status quo.

Such a system is in contradiction to the spirit of fairness, grace, and justice—the striving toward mature morality—present in religious doctrines at their best. An infinite punishment for a finite set of crimes does not make sense. And the idea of punishing someone not for a crime, but for having an idea, belief, or thought that does not conform to a prescribed norm, is contrary to most people’s concept of a healthy society, and contrary to the “bill of rights” ideals that many of us—religious or not—value highly.

In the world, on average, roughly two people die every second—about 7,200 deaths per hour, and on the order of five million per month. Only a fraction of these people follow any one particular religious belief system. Therefore, if one holds a strict doctrine of Hell tied to a strict interpretation of “correct belief,” it would follow that thousands of people every hour—including many who lived gentle, kind, generous lives—would be banished into eternal punitive suffering because they did not endorse the right beliefs. Conversely, many who behaved cruelly all their lives could receive an infinite reward if they endorsed the correct beliefs at the last moment. Imagine an all-powerful divine creator consigning about one person every second, many of them kindly elders who simply didn’t happen to endorse the appropriate beliefs, into a flaming inferno.
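The arithmetic behind these figures is simple enough to check. Here is a back-of-envelope sketch (the starting rate of roughly two deaths per second is itself an approximation, not precise demographic data):

```python
# Back-of-envelope check of the death-rate figures cited above.
# The starting rate (~2 deaths per second worldwide) is an approximation.
deaths_per_second = 2

deaths_per_hour = deaths_per_second * 60 * 60   # 7,200
deaths_per_month = deaths_per_hour * 24 * 30    # ~5.2 million (30-day month)

print(f"per hour:  {deaths_per_hour:,}")    # per hour:  7,200
print(f"per month: {deaths_per_month:,}")   # per month: 5,184,000
```

The monthly figure of about 5.2 million is consistent with the "on the order of five million" estimate in the text.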

If one truly believes this is the fate of countless people, one would be forced into a grim psychological choice: either adopt indifference to unimaginable suffering, adopt a horrific view of how reality works, or devote one’s life to converting as many people as possible so as to save them from hell. It would not make sense to devote one’s life to rescuing people on a smaller scale (being a firefighter, a physician, a therapist, a humanitarian worker), since this would distract from the colossal task of saving people from an infinitely worse fate than any earthly accident, illness, or war could impose. Proselytizing would seem to be the only fully rational altruistic activity. And if you wanted to “save the most people efficiently,” you would focus your efforts on those with shorter life expectancy, since their impending eternal suffering would arrive sooner. If one’s own friend or child strayed from the perceived correct religious involvement, it would be understandable—within this belief system—to view this as the most horrifying contingency imaginable, perhaps even more devastating than losing them to illness, assault, or accident, because the imagined suffering would be permanent.

This is one reason the Heaven-and-Hell framework is so morally destabilizing. It incentivizes fear, coercion, and tribal control, while undermining the best ethical themes that religions also sometimes teach: compassion, humility, grace, and love.

There is a sentiment attributed to Mother Teresa that I find ethically beautiful: that if Hell existed, the truly loving response would not be triumph or indifference, but a willingness to comfort those who suffer there. I think we should all strive towards such transcendence of character.


The Psychology of Religion, Chapter 21: Historical Atrocities

Humans have engaged in all manner of atrocities, and despite the horrors of the past century, we see repeatedly—across earlier centuries as well—how easily cruelty can be normalized, ritualized, and justified. The human capacity for harm is ancient. What is especially sobering, though, is how often major institutions—including major religions—can make cruelty feel righteous.


Many historical atrocities have occurred under the banner of religion, especially when religious identity fused with conquest, state power, or tribal domination. Charlemagne’s campaigns against the Saxons (772–804 CE), for example, fused military conquest with coerced Christianization; forced conversion was backed by severe legal penalties, and there were episodes of mass killing in the course of suppressing Saxon resistance, most notably the Massacre of Verden in 782, where 4,500 Saxon prisoners were reportedly executed in a single day. 

The Crusades (1095–1291) likewise included mass slaughter justified in explicitly religious terms: the Rhineland massacres of 1096 saw the destruction of Jewish communities in Speyer, Worms, and Mainz by crusader mobs, and the Siege of Jerusalem in 1099 ended with the indiscriminate mass killing of Muslims and Jews within the city walls.

The Thirty Years’ War (1618–1648), driven in significant part by religious divisions between Protestant and Catholic states in the Holy Roman Empire, became one of the most devastating catastrophes in European history. Ending with the Peace of Westphalia, the conflict caused millions of deaths, killing up to a third of the population in some German territories—many from famine and disease rather than battlefield combat—and leaving a legacy of psychological trauma and social ruin.

The Spanish Inquisition (established in 1478 and lasting until 1834) created a terrifying machinery of coercion and intimidation, with religious motives explicitly invoked; the exact numbers are debated by historians, but the core point is not: it was a system designed to enforce conformity (targeting Jewish conversos and later Protestants) through fear, punishment, and (in many cases) execution.

Colonial movements in more recent centuries often deployed religious language—“civilization,” “salvation,” missionary uplift—as moral cover for economic extraction and domination. The Congo Free State terror under Leopold II (1885–1908) is one of the most infamous examples of colonial exploitation and brutality, resulting in the deaths of millions through forced labor and systemic violence. 

The transatlantic slave trade and slavery (spanning roughly the 16th to the 19th centuries) were likewise justified by many religious leaders and institutions in their own time (often citing the biblical “Curse of Ham” as a theological rationale), even as other religious figures became central to abolitionist movements. The point is not that religion uniquely causes exploitation, but that it has repeatedly been recruited to sanctify it.

The same pattern appears in Canadian history. “Christianization” was one motive—alongside state assimilationist policy—behind the Residential School system (which operated federally from 1883 until the last school closed in 1996). In this system, more than 150,000 Indigenous children passed through church-run, state-funded institutions characterized by coercion, cultural destruction, and extensive abuse, with many children dying and records often incomplete. 

The Spanish conquest of the Americas (beginning in 1492 and intensifying with Cortés's campaign against the Aztecs in 1519) similarly involved catastrophic Indigenous death and cultural devastation. While infectious disease accounted for much of the mortality, religious institutions were at best entangled with the colonial project and at worst active participants in its dehumanization, often reading the Requerimiento—a demand for submission to the Pope and Crown—to uncomprehending Indigenous populations before launching attacks.

Of course, in human history, violence and atrocity have occurred without religion, and secular ideologies have also justified horrors. But it is very clear that religions have not been reliably protective against the worst destructive drives of humanity. Worse, religious certainty has often been deployed to justify abuse, discrimination, and war—to lend the aura of sacred duty to actions that would otherwise look like what they are: cruelty, domination, and theft.

The Psychology of Religion, Chapter 20: Religious Abuse

Abuse is unfortunately common. It affects every type of community and family. I have seen numerous cases in which religious texts or elements of religious faith were used as tools to abuse innocent children. (To protect privacy, identifying details have been altered, and some examples are composites.)

This includes one of the worst cases of emotional abuse I have seen in my career.

In this case, a teenager with a gentle, intelligent, altruistic personality—living in an affluent household—was subjected to forced “family sessions” late at night. She would be made to sit for hours in her bedroom while various family members recited Bible passages in a formal, prosecutorial tone, directed by a brutal, controlling father. The purpose was not moral guidance; it was humiliation and intimidation.

The teenager was, in fact, actively involved in altruistic leadership at a church. But the family accused her of hypocrisy and of being a “false disciple,” citing passages such as Matthew 7:21–23 and Matthew 23:13–20, and repeatedly telling her, “God has abandoned you,” alongside threats that she would go to hell. 

Then the family would pivot to the Old Testament, including Deuteronomy 21:18–21, which describes a “stubborn and rebellious child” being stoned to death by the community.
Because she was religious herself, this experience was not merely frightening; it was torturous—permanently traumatizing—especially in combination with the family’s other abuse and neglect.

These episodes were interspersed with the family’s evangelical outreach efforts in the community, “to spread the word.” As is often the case, the parents were seen as pious and respectable by others. Of course, abusive behavior has complex causes, and in the absence of religion these parents might have weaponized something else. But in this family, the abuse worsened as religious involvement intensified. Congregants who were aware of what was happening were horrified, but they did little to intervene beyond offering prayer.

In another example, children of a very religious mother experienced profound daily neglect and emotional abuse for years. Once again, members of the religious community did little to change the situation other than pray. When one of these children later lived in a different environment with the other non-religious parent, her quality of life improved dramatically. She grew into an intelligent, kind, outstanding young woman—though she still carries post-traumatic symptoms from that earlier phase of life.

In another, a family had previously been happy and well-integrated with the extended family, but as they became more involved in extreme fundamentalist religion, their personalities seemed to change. They became dark, angry, and suspicious, eventually estranging themselves from the rest of the family. Threatening posters appeared on their property with scriptural warnings about hell. Attempts to reach out with kindness were met with scolding condemnations about religious differences. A particular low point was an angry, rambling religious rant delivered during the funeral service of a family elder. These changes tracked with the family becoming more insular and more committed to extreme beliefs and practices. To this day, I feel for the children who had to grow up in that environment.

I have seen numerous examples of estrangement: religious parents ostracizing, shaming, or shunning children over lifestyle or belief differences—sometimes with these actions encouraged and applauded by the religious community. In other cases, religious adults shunned their aging parents, depriving them of access to grandchildren, again with some pious explanation. As always, there are contributing factors beyond religiosity—personality traits, trauma histories, rigid family systems—but it is hard to deny that dogmatic belief, combined with community endorsement, can make these problems deeper and more entrenched.

One phrase I have heard from abusive religious parents is: “turn or burn.” I find this a concise epitome of a belief that often lurks in the background: if you don’t follow my belief, you deserve to be tortured forever. It is offered as an “invitation,” but it functions as a threat. It may even be well‑meant in some warped way, yet it violates the moral foundations the religion claims to represent. Surely, if a way of life is divinely inspired, it should be compelling because it is beautiful and ethically coherent—not because it terrifies people into compliance.

It can be clarifying to hear accounts from people who have escaped abusive religious communities. Megan Phelps‑Roper is one example. One of her most useful insights is not a clever argument against dogma, but a relational one: what helped her most was sustained contact with outsiders who treated her with compassion and respect—people who were willing to build a human connection before trying to debate her beliefs.

The Psychology of Religion, Chapter 19: Object Relations

Humans have a far more richly developed capacity for imagination than other animals. We can carry internalized representations of important relationships inside the mind. In a loose way, this resembles having an “imaginary friend,” but the point is not childish fantasy—it is a normal developmental achievement: the capacity to hold another person in mind when they are not physically present. This is one of the foundations of object relations theory, one of the more insightful and useful branches of psychoanalysis.

Developmentally, we are initially comforted by a literal parent. Over time, we can also carry in memory an internalized representation of the parent—something like an inner sense of their presence, values, and voice—which can be comforting and stabilizing even when we are alone. This helps us develop confidence and emotional continuity, and it helps us cope with separation and, eventually, grief if a loved one dies.

For many people, religious life includes an internalized relationship with an idealized figure they call God. In much Western Christian imagery (and often in people’s mental pictures), this figure is imagined in human form—often as a bearded man, sometimes portrayed as white—despite the Middle Eastern Biblical setting of the “Holy Land” and the diversity of human appearance worldwide. Many people experience this internal figure as gentle, kind, fatherly, all-knowing, loving, wise, consistent, coach-like, or even therapist-like. Others internalize a divine figure who feels stern or frightening, poised to punish wrongdoing. Often these images reflect what people have learned to associate with authority, safety, and love in their own families and communities—whether authority is experienced as warm and reassuring, or strict and punitive.

Just like relationships with living humans, people can become fiercely loyal to these internal relationship figures—sometimes to extremes, including willingness to suffer or die in service of what they experience as sacred. And because this relationship is experienced as profoundly real, it is unsurprising that many believers feel anger or grief when someone frames it as “imaginary,” or as an internal construct rather than an external reality.

Many traditions also include a personified concept of ultimate evil—often described in devil-like terms. Psychologically, this can make moral struggle more vivid and narratively coherent: it reframes temptation, cruelty, or regretful behavior as a battle against an external force rather than as a confrontation with one’s own capacity for harm. In a tight-knit community, shared belief in external evil can sometimes make reintegration easier: if wrongdoing can be attributed to “the Devil” rather than to the person’s character, the community may find it easier to forgive—especially if a ritual of repentance, prayer, or “deliverance” has been performed. But there is a downside as well: externalizing evil can blunt accountability, and it can also encourage projection—seeing “the Devil” in outsiders, dissenters, or scapegoats—fueling fear, prejudice, or moral panic.

Thursday, February 26, 2026

The Psychology of Religion, Chapter 18: Prayer

Prayer may mean different things to different people. For many, it is a meditative act: a type of philosophical reflection with existential themes, a kind of relaxation therapy, a “grounding” moment. The praying person may believe they are having a conversation with God. The manner in which God is understood to speak back is often taken in a broad, figurative way—for example, if the person subsequently has a new idea, an inclination, a redoubling of confidence, or a wave of emotion that feels like guidance. Other people may not expect that God will “speak back” at all; they may be content simply to vent, confess, grieve, or reflect within a reverent framework. In some ways this resembles classical psychoanalysis: the listener is largely silent, and the act of speaking—slowly, honestly, repeatedly—becomes the mechanism.

But many people also pray for things: for an outcome to change, for an illness to heal, for a surgery to go well, for a war to end, for a relationship to mend. That kind of prayer is different. If it is literally effective, it would mean that events in the physical world are being altered—something in the normal chain of causation is being nudged off course. And if this were happening in a consistent, repeatable way, you would expect to see clear clusters of unusually good outcomes in places where people pray the most, or where the “right” kind of prayer is supposedly most common. You would expect the world—especially in more religious areas—to look as though the ordinary rules of physics were being bent on request. I am not aware of any such pattern.

When researchers have tried to test this carefully—especially with “praying for someone else” (intercessory prayer)—the results have not produced a solid, repeatable signal. A well-known example is the STEP trial in cardiac bypass patients: people were randomized to receive or not receive intercessory prayer, and another group was told with certainty that they were being prayed for. Overall, prayer did not reduce medical complications. Interestingly, the group who knew they were being prayed for actually did a bit worse: complications were reported in 59% of those certain they were receiving prayer versus 52% in a comparison group. One plausible explanation is psychological: once a person is told “people are praying for you,” it can quietly raise the pressure. What if I don’t get better? What does that mean about me? About God? About my faith? For someone already frightened and vulnerable, that extra layer—expectation, scrutiny, the sense that a spiritual “test” is underway—can add stress rather than comfort.

The moral structure of prayer often mirrors the moral structure of empathy. Many people’s prayers are genuinely compassionate: they think of struggling friends or family members, or of terrible world events, and they ask for comfort, protection, and healing. But if prayer is believed to cause divine comfort to arrive, this raises an uncomfortable counterfactual: if the prayer had not occurred, would comfort have been withheld? And shouldn’t a loving deity comfort suffering people regardless of whether someone happens to pray for them—especially since some of the worst suffering on earth occurs in isolation, unnoticed, with no one else even aware enough to pray? It suggests a troubling arrangement where God’s help isn’t based on who is suffering the most, but on who is lucky enough to be noticed.

This is also where it helps to remember Paul Bloom’s critique of empathy (see my review of his book, Against Empathy). Empathy is often biased and therefore unjust: it is pulled toward people who resemble us, toward vivid stories, toward those whose suffering is emotionally dramatic, while neglecting the quiet, the distant, the stigmatized, and the statistically larger tragedies that do not come with a single tear-streaked face. Prayer often inherits this same distortion. We pray intensely for the salient and familiar, and far less for abstract fairness, or for the invisible victims who never make it into our attention.

Many prayers are not about others at all; they are about wishing something for oneself. There are battlefield prayers. Prayers before a medical procedure. Prayers for money, for a job, for the return of an ex-partner, for relief from chronic pain, for the outcome of a baseball pitch or a hockey game. As a meditative act, this is deeply understandable. But psychologically it can set up a reinforcement loop: if the prayer is followed by a good outcome, the person will naturally feel it “worked,” and will be bolstered to pray again. If the outcome is bad, the person may conclude they didn’t pray sincerely enough, or long enough, or correctly enough—or that God was busy, or displeased, or testing them. Either way, the practice becomes insulated from disconfirmation.

This helps explain why prayer works psychologically, even if the supernatural claims aren’t true. As a form of meditation or reflection, it can be calming and help organize our thoughts. But as a way to change the laws of physics or alter the course of events, it functions as a self-reinforcing loop. When a prayer is followed by a desired outcome, it is taken as proof of God’s power. When it isn’t, the failure is easily explained away—either God said “no,” or we didn’t pray with enough faith. This dynamic validates the belief system regardless of the result, but it places an immense burden on the believer—creating the stressful illusion that their personal spiritual effort is the decisive factor in changing reality.
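The self-sealing quality of this loop can be shown with a toy simulation (purely illustrative numbers, not a model of any real study): outcomes are pure chance, but confidence is updated asymmetrically, because a "miss" is reattributed rather than counted as evidence against the belief.

```python
import random

# Toy model of a self-sealing belief loop. A "hit" (desired outcome)
# raises confidence; a "miss" is explained away and raises it slightly
# too ("I must pray harder"). All parameters are illustrative assumptions.
random.seed(0)

confidence = 0.5
for _ in range(1000):
    prayer_answered = random.random() < 0.5        # coin-flip outcome
    if prayer_answered:
        confidence = min(1.0, confidence + 0.01)   # "it worked"
    else:
        confidence = min(1.0, confidence + 0.002)  # "I wasn't sincere enough"

print(round(confidence, 2))  # prints 1.0 — certainty, from pure noise
```

Because no outcome ever lowers confidence, the believer reaches certainty regardless of what actually happens, which is precisely what it means for a practice to be insulated from disconfirmation.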

The Psychology of Religion, Chapter 17: Shepherding

A related religious metaphor is shepherding. Jesus is called the “Good Shepherd,” and there are many other biblical passages that liken God to a shepherd. It is a beautiful image, and as a child I absorbed it in exactly that spirit: kindly pastoral artwork, a gentle man with a hooked staff, sunny hills, a flock of woolly friends, perhaps one little sheep who has wandered off and needs to be carried back to safety.

But it is worth pausing to remember what shepherding actually meant in that time and place. Sheep were not kept as pets. They were livestock: valued for wool and milk, yes, but also raised for meat—and sometimes for sacrifice. Sacrifice would involve securing the animal with iron rings in front of an altar, cutting its throat, and collecting its blood in a special container to be splashed against the altar; the animal would then be hung from a hook and skinned, and various organs would be removed and burned.

A shepherd’s role was not only protection and guidance; it also involved ownership, control, and (eventually) decisions about which animals would be killed, sacrificed, or eaten. In that light, “being shepherded” contains an unsettling double meaning: you are kept from straying, guarded from wolves, and held within the safety of the flock—but you are also being managed toward ends that are not your own.

And if we push the image just one step closer to lived reality, it gets darker in a way the children’s illustrations never hinted at. Imagine being a sheep in the flock: every so often the younger males—your cousins, in a sense—are taken away. Perhaps they are led toward a little shed at the edge of the field, or down a path behind a stand of trees, and they are simply never seen again. The flock goes on grazing. The shepherd is still “protecting” the flock. But the protection is inseparable from a system in which some members are quietly designated for disappearance.

To be fair, the Christian image in particular tries to invert the usual arrangement: the “Good Shepherd” is portrayed as laying down his life for the sheep. That is morally striking. Still, the metaphor does something psychologically and socially important: it trains us to admire a certain kind of relationship—one in which docility is a virtue, “straying” is a moral failure, and the authority to define what counts as straying belongs to the shepherd.

The phrase “sheep gone astray” appears repeatedly in scripture, usually as a metaphor for human misbehavior. But actual sheep that never “go astray” do not graduate into freedom; they remain in the flock under management. As a child I never thought of this. Now I think the metaphor is revealing, not because it proves anything on its own, but because it quietly captures an entire moral posture: safety in exchange for surrender—comfort in exchange for obedience.

Wednesday, February 25, 2026

The Psychology of Religion, Chapter 16: Sacrifice

Most religions have some form of sacrifice alluded to in their theology. Sometimes this involves literal offerings—killing and burning animals, or destroying valuable objects. Other times it is “bloodless”: giving money, time, obedience, or the renunciation of pleasures through fasting, abstinence, or celibacy. In all these cases, the underlying idea is similar: something costly is offered up, with the hope of securing meaning, favor, purity, forgiveness, protection, or communal belonging.

There are also sacrificial motifs that move disturbingly close to human sacrifice. In the Abrahamic traditions, for example, the willingness of Abraham/Ibrahim to sacrifice his son is presented as a peak test of obedience—and in Islam it is commemorated annually in Eid al-Adha, the “Festival of Sacrifice,” in which animal sacrifice functions as a memorial of that story. And in Christianity, the theme of sacrifice is carried into the central story of Jesus: a dramatic moral and symbolic reframing of sacrifice into self-sacrifice, offered “for others.”

Sacrifice is, in my view, an extension of ordinary human ideas about reciprocity and gratitude—infused with magical thinking. In a community we do favors, give gifts, and care for one another. These behaviors can be altruistic, but they are also supported by norms of reciprocity. If one believes that a mystical power controls destiny, fertility, weather, health, wealth, or military success, it becomes psychologically “reasonable,” within that worldview, to give that power a gift—hoping for a return.

And once a person enters this mindset, the logic can become self-sealing. If you make sacrifices and misfortune still comes, you can conclude the offering wasn’t sufficient, wasn’t sincere enough, or wasn’t given with the right purity of heart—so you must increase it next time. If something good happens afterward, it feels like proof that the sacrifice worked and should be repeated. In this way, the practice of sacrifice can escalate into increasingly brutal and destructive behaviour. The sacrificed animals—often the most vulnerable and least able to “consent” to the human story being told about them—do not get much say in the matter.

Another motivation for sacrificial rituals likely came from the brutal necessities of ancient life: hunting animals, or killing domestic animals for food. Most humans bond to animals easily, and it would be psychologically troubling to watch an animal struggle and suffer. Ritual can function as moral anesthetic: a way to consecrate violence, to assuage guilt, and to turn a grim necessity into a story of gratitude, order, and meaning.

Sacrifice can also be political performance. Public ritual can consolidate hierarchy (especially priestly hierarchy), display power, intensify fear, and signal unity. It is not hard to see how sacrifice functions as a kind of social technology: it makes shared belief visible and costly.

This is also where sacrifice connects to group psychology. Some scholars have argued that costly rituals—things you would not do unless you were committed—operate as signals that strengthen trust and cooperation within a group, partly by filtering out free riders. A community bound together by shared sacrifice can feel safer, warmer, and more morally serious to its members. But that same mechanism can harden boundaries and intensify suspicion of outsiders.

Speaking of reciprocity: favoring and helping genetic relatives, sometimes even in self-sacrificial ways, is a strongly selected trait. A trait that causes its bearer to selectively help close relatives will tend to persist in the family line, because it helps protect and propagate the very genetic network that carries it forward. This is a simple evolutionary logic: kin altruism increases the survival and reproductive success of the shared family “pool,” even when it costs the individual something in the short run.
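This kin-selection logic has a standard formalization in evolutionary biology, Hamilton's rule, which the paragraph above paraphrases (noted here as context; the chapter itself does not cite it):

```latex
% Hamilton's rule: an allele for helping relatives is favored by
% selection when the relatedness-discounted benefit exceeds the cost:
%
%   rB > C
%
% r : coefficient of relatedness (e.g., 1/2 for full siblings,
%     1/8 for first cousins)
% B : reproductive benefit to the recipient of the help
% C : reproductive cost to the helper
rB > C
```

The rule captures why costly helping can persist: the cost to the individual is offset by benefits flowing to copies of the same genes in relatives.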

But humans do not walk around calculating degrees of genetic relatedness. Instead, we rely on crude, fast heuristics—cues that, over most of human history, were often correlated with kinship and shared ancestry. People who live near each other, marry each other, and raise children together will, over generations, tend to share not only genes but also language, accent, customs, dress, and social norms. Conversely, people who look different, speak differently, or practice very different customs are often from a different village, tribe, or family network—and are therefore, on average, less closely related than the people who share your immediate cultural and familial world.

The mind has evolved to be slightly more generous, trusting, and self-sacrificing toward those who are more likely to be “one of us,” so it follows that it may also be less generous, more suspicious, or more emotionally distant toward those who feel like “not us.” These tendencies are not destiny, and they are not moral justification—but they are part of the psychological and evolutionary foundation of prejudice. These are precisely the sorts of inherited inclinations we must learn to recognize, challenge, and actively override.

The Psychology of Religion, Chapter 15: Spirituality

Humans have cognitive tendencies that make superstitious beliefs easy to generate—and hard to extinguish. Beliefs in spirits, ghosts, magic, luck, or fate guided by mysterious forces are so widespread across cultures that they are difficult to avoid noticing. The surface content varies wildly from place to place—local spirits, protective rituals, sacred objects, invisible dangers—but the underlying psychological grammar is familiar.

A core ingredient is pattern-seeking. The mind craves meaning, and when the world is uncertain or painful it will often manufacture meaning rather than tolerate ambiguity. This is not stupidity; it is ordinary cognition under stress. When people feel a loss of control, they become more likely to perceive patterns—even illusory ones—in the environment, and to treat coincidence as signal. Superstition can be emotionally satisfying precisely because it converts randomness into a story.

Stories, dreams, unusual experiences, and compelling anecdotes can then become socially transmissible. Once a few people begin to interpret events through a “hidden forces” framework, the framework spreads: it gives language to fear and hope, it creates a sense of specialness, and it offers the pleasure of explanatory closure. Coincidences become “signs.” Ambiguous perceptions become “messages.” A confusing life becomes a legible plot.

From a psychiatric point of view, there is also genuine individual variation in proneness to unusual, mystical, or numinous experience. Some people reliably feel awe, presence, synchronicity, and “spiritual certainty,” while others rarely do. This is shaped by personality and temperament, by culture and reinforcement, and by biology. One useful but imperfect metaphor is that some minds run with higher “gain”: experience arrives vivid and compelling, but with a greater risk that noise is interpreted as signal. Salience systems in the brain—dopamine is one relevant piece of that puzzle—are part of how humans decide what feels meaningful, and research on paranormal belief repeatedly circles around the study of such neurotransmitter systems.  

Many members of organized religions disparage “superstition” or free-floating “spirituality.” Yet in psychological terms, the differences are often of degree rather than kind. Organized religions tend to formalize these human tendencies into institutions: they standardize the stories, professionalize the interpreters, and link belief to group identity and obligation. “Spirituality,” in contrast, often keeps the intuitions while loosening the institutional grip. But both draw on the same human appetite for meaning, comfort, and narrative.

The Psychology of Religion, Chapter 14: Religion as a Business

Many religions and other spiritual practices operate partly like a business. There is marketing (proselytization, outreach), branding (symbols people wear on clothing or on necklaces), encouragement to be loyal to your brand, and criticism of other brands. There is also a financial commitment, managed through an organized financial structure. Members do the work of that structure, with an ultimate goal—explicit or implicit—of retaining and expanding membership, eliciting volunteering efforts and financial contributions, and maintaining morale.

With some intensely tribal, high-commitment groups (fraternities are the obvious benign example, gangs the darker one), there can be an onerous initiation ritual. Social psychologists have shown that when people have to work hard, endure discomfort, or pay a steep price to join, they often become more loyal afterward—partly because the mind naturally tries to justify what it has sacrificed. Religions also commonly have initiation processes: potential members may be vetted, attend educational sessions, and then take part in some public ritual in which solemn commitments are made.

Sometimes, as with luxury business models, broad proselytization does not occur; instead, the “product” is restricted. Only a select few gain entry. In some traditions you need advanced membership—often taking years—before you are allowed to enter certain beautiful buildings such as temples, or partake in certain deep rituals. Sometimes only men are allowed into certain leadership roles or ritual spaces. These obstacles increase the allure and tend to attract people willing to contribute more commitment, time, and money. If everybody had a Rolex watch or a Gucci bag, it would cease to be as special; exclusivity is part of what makes the object feel “high-end.”

One particular feature of religion that resembles a corporate tactic is the elevation of belief alone—faith—as a key virtue. Belief without evidence is not merely tolerated; it is often praised. If a corporation could successfully propagate that idea, it would be extremely useful for marketing, since people would form loyalty to the brand without looking too closely at “reviews.” Doubt could be reframed as weakness, betrayal, or impurity. Meanwhile, “true believers” are rewarded: their status, trust, and esteem in the community rise in proportion to their loyalty.

In many cases religious institutions amass vast wealth: in property, buildings, and investments. In at least some prominent modern examples, credible reporting and public filings have described religious investment holdings on the order of tens of billions of dollars, with wider claims in some cases exceeding $100 billion—figures that are difficult to reconcile with the ordinary believer’s image of humble spiritual stewardship. And these structures often operate with significant tax advantages. In the United States, churches are generally treated as tax-exempt. In Canada, registered charities (including many religious organizations) are exempt from income tax.

And yet, some of the most insightful cautions about wealth come from within religion itself. One of the sharpest is the line attributed to Jesus (present in all three Synoptic Gospels): “it is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God.”

The Psychology of Religion, Chapter 13: To Be a Scholar, You Had to Study Theology

Over history, many wise people who wanted to use their intelligence and talents to learn about existential, philosophical, or scientific topics—while also guiding or helping their communities—ended up studying religion. In many eras, the church (or more broadly, religious institutions) offered one of the most stable pathways to literacy, scholarship, social leadership, and public voice.

This created an important structural bias: if you wanted to become a scholar, you often had to pass through theology. It wasn’t simply that “smart people liked theology.” Rather, theology was pushed into the intellectual mainstream partly because it controlled the educational pipeline. If you wanted books, training, mentorship, libraries, credentials, or a platform to teach, preach, or write, you frequently had to operate inside an institution whose organizing framework was theological. In that setting, it is not surprising that many great intellectual leaders, creative people, and moral voices either embraced religious thinking sincerely, or at least spoke its language fluently—because that was the price of entry into the scholarly world.

Of course, this does not mean that the underlying doctrines were necessarily correct. It means that there was a selection effect: theology and scholarship were tightly coupled, so theology gained prestige from its association with learning. When a brilliant, educated, altruistic person was also a theologian (or clergy), observers could easily slide into a mistaken inference: their intelligence validates their religious beliefs. But the more cautious interpretation is that intelligent, thoughtful people are compelling—and often good for society—even when they hold unfounded beliefs.

There is another aspect to this bias that matters just as much: in many historical settings, if you were a great scholar and you wanted to challenge the theological framework publicly, you didn’t simply risk social disagreement—you risked losing the very venue that made scholarship possible. Sometimes the penalty was professional exile; sometimes it was censorship; sometimes it escalated to legal punishment or execution. This is not a minor footnote. It means that the historical record is not a neutral marketplace of ideas: the “intellectual mainstream” had guardrails enforced by religious authority.

Galileo is one of the clearest examples. His work was not a vague “anti-religious attitude”; it was substantive science: telescopic astronomy and arguments for the Copernican model, expressed publicly and persuasively in works such as the Dialogue Concerning the Two Chief World Systems. He was tried in 1633 and spent the rest of his life under house arrest. Whatever one thinks of the surrounding politics, the lesson for my argument is straightforward: a scholar’s survival, platform, and legitimacy could depend on staying within theological limits.

Giordano Bruno illustrates a slightly different version of the same phenomenon. Bruno was a philosopher with cosmological ideas—sympathetic to Copernican thought and willing to push toward an image of an infinite universe and a plurality of worlds—entangled with theological claims that authorities judged heretical. He was executed in Rome in 1600. The point is not to turn him into a simplistic martyr for “science.” The point is that the intellectual venue was governed by religious boundaries, and crossing those boundaries could be fatal.

And this pressure was not limited to cosmology. Reformers, translators, and dissidents—people working on questions of authority, conscience, textual interpretation, and the right to think aloud—were also punished severely in various times and places. (Even when their “project” was not a new scientific discovery, the underlying conflict was similar: who gets to define truth, and what happens to you if you disagree.)

To be fair, religious institutions also preserved and transmitted learning in many eras; the story is complicated. But it is precisely because the story is complicated that the bias matters: when one institution is both a guardian of education and an enforcer of “correct thinking,” you will inevitably see a historical overrepresentation of scholars who were theologically fluent, and an underrepresentation of scholars who were openly theologically defiant. In some periods, the risk was not just social—it was life and death.

There is even a modern echo of this. Today, most academic science operates in secular institutions with strong norms of open debate. Yet in some settings—religious universities, seminaries, ideologically bound communities, or authoritarian political climates—career paths and social legitimacy can still be contingent on affirming a doctrinal framework. Even when this is not enforced by law, it can be enforced by social sanctions: loss of community, loss of employment, loss of status, loss of belonging. The mechanism is the same as in earlier centuries, but usually softer: a belief system becomes part of the admission ticket to a valued intellectual or social world. And once again, this can create the misleading appearance that “the best minds endorse the doctrine,” when part of what is happening is that dissenters self-select out—or are pushed out.

So the historical association between theology and scholarship has often been non-causal. It is partly an artifact of institutional history: for long stretches of time, if you wanted to be educated, you had to study religion; and if you wanted to remain educated publicly, you often had to respect religion’s boundaries. That fact alone can make the intellectual prestige of religion look stronger than the evidence for its literal claims actually warrants.

The Psychology of Religion, Chapter 12: Benefits of Unfounded Belief

Another interesting—but potentially troubling—angle on all of this is that people can sometimes latch onto a belief system whose claims are plainly unfounded, and yet experience real improvements that they had not found otherwise. Those improvements can entrench the belief further, because the person now has lived evidence: “It worked for me.” The mechanism is similar to the self-deception-mediated dynamic Trivers describes, and similar to the way psychoanalysis could help people even when much of its causal theory was exaggerated or wrong.

For example, some people latch onto an extremely rigorous or bizarre diet with a completely spurious rationale, and yet end up stabilizing a prior eating problem or obesity problem. Often what makes the diet “work” is not the theory, but the frame: the diet becomes a totalizing structure, a rule-set, a ritual, a commitment device. Strong belief in the diet’s narrative can increase adherence—sometimes dramatically—especially when the belief is reinforced by “spiritual” practices, authoritative texts, a charismatic leader, and enthusiastic support from fellow adherents. The resulting improvement may have little to do with the supposed mechanism (“toxins,” “energy,” “inflammation caused by impurity,” or whatever the myth is) and much to do with ordinary behavioral ingredients: attention to quantities and timing, reduced ultra-processed foods, fewer calories, more routine, more social accountability, and sometimes more exercise. In other words, the distinguishing features of the theory may be fictional, while the behavior change is real.

It is tempting to treat these forays into false belief as harmless whenever they produce visible gains. But there is a dark side. Some dietary regimens are medically dangerous; some aggravate eating disorders; and many cultivate a loyalty to the framework that discourages critical thinking. When a person’s identity becomes fused with a belief system, they may reject better treatments when those treatments are indicated—especially if a setback is interpreted as evidence of insufficient “faith,” insufficient purity, or insufficient devotion. In addition, these frameworks often come packaged with community. People who join one cluster of unusual health beliefs can be pulled, by social gravity, into neighboring clusters: new spiritual doctrines, political identities, conspiratorial worldviews, and the subtle expectation of financial contribution—paid coaching, proprietary supplements, retreats, memberships. The incentives are often misaligned.  

And there is another distortion worth naming: we mostly hear from the success stories. The people whose diets failed, harmed them, or simply became expensive obsessions rarely become public evangelists. The community’s narrative therefore becomes skewed toward “miracles,” while the quiet attrition and collateral damage remain invisible.

Finally, just as in religions, the next step is often proselytizing. People who believe they have found salvation—whether dietary, medical, or spiritual—tend to recruit. They may pressure friends and family to “convert,” and disparage outsiders as ignorant or closed-minded. In the context of fad diets and alternative medicine, that can do real harm to public health.

So the point is not that false belief never “helps.” The point is that when it helps, it often does so through common human mechanisms—structure, community, meaning, identity, accountability—while smuggling in risks that are easy to deny and hard to reverse once the belief becomes an emblem of belonging.

The Psychology of Religion, Chapter 11: Evolution

The following is a lengthy discussion of evolutionary science. I think it deserves space here because it deals with the origins and diversification of life in a way that directly contradicts literal religious or mythological accounts of creation. It is certainly possible to remain religious while accepting evolution—many people do—but for some believers, evolutionary biology feels like an unacceptable affront to faith, because it replaces a story of intentional design with a story of natural processes unfolding over vast time.

There are experts in evolutionary science who can explain this far better than I can. Still, I want to set aside space for it in my own voice, because (1) the basic logic is not hard to understand, (2) the evidence is overwhelming, and (3) the emotional resistance to it often has very little to do with evidence, and a lot to do with identity, belonging, and sacred narratives—topics I’ve already been discussing.

What follows is a tour through the core mechanism (natural selection), a few common misunderstandings, the idea of speciation, and then a few “side corridors” that matter for this essay: cultural (memetic) evolution, sexual selection, and the uncomfortable fact that even religiosity itself is partly shaped by temperament and biology, not only by culture.
Natural Selection

Natural selection is the central guiding principle of evolutionary theory. The logic is profoundly simple. It requires only that we accept three basic facts:

1. Organisms vary (physically, physiologically, behaviorally).

2. Some variation is heritable (traits are influenced by DNA, even though environment matters enormously too).

3. Some traits affect reproductive success—not in a morally loaded sense of “deserving,” but in the literal sense that some variants leave more surviving offspring than others.

If a heritable trait increases the probability of leaving more surviving offspring in a particular environment, then—over generations—the population will contain more of that trait. If a trait reduces reproductive success, it tends to diminish. That’s it: differential reproduction plus heritable variation, iterated over time.
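The arithmetic of “differential reproduction plus heritable variation, iterated over time” is simple enough to run as a toy model. This sketch uses a made-up trait with two variants and an invented 5% fitness advantage; the numbers are illustrative, not real biology:

```python
# Toy model of natural selection: one heritable trait with two variants.
# Variant "A" leaves 5% more surviving offspring per generation than "a".
# Nothing here "improves" in a moral sense; the frequency of A simply
# shifts through differential reproduction, iterated over generations.

def select(freq_a: float, fitness_a: float = 1.05, fitness_b: float = 1.0,
           generations: int = 100) -> float:
    """Deterministic replicator update: returns the new frequency of variant A."""
    for _ in range(generations):
        mean_fitness = freq_a * fitness_a + (1 - freq_a) * fitness_b
        freq_a = freq_a * fitness_a / mean_fitness  # A's share of offspring
    return freq_a

# A rare variant (1%) with a modest 5% advantage:
print(round(select(0.01, generations=50), 3))   # → 0.104
print(round(select(0.01, generations=200), 3))  # → 0.994
```

The point of the toy model is the shape of the result: a barely noticeable per-generation advantage takes a variant from 1% to near-fixation in a couple of hundred generations, which is an eyeblink on geological timescales.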

You can see the basic logic everywhere, from selective breeding in crops and animals, to antibiotic resistance in microbes, to the obvious family resemblance in both physical and psychological traits among human relatives. None of this requires the belief that genes are the only cause of traits; it requires only the admission that heredity is a major contributor.

A point worth emphasizing (because it is often misunderstood): natural selection is not about “improvement” in any moral or progressive sense. It is simply a filter that favors whatever works well enough in a local environment at a given time.

DNA replication is extraordinarily accurate, but not perfectly so. Across generations, there are small changes—mutations—introduced into genetic material. “Mutation” here does not mean “bad.” It means “change.” Most mutations are neutral; some are harmful; a few are beneficial in a given environment. In addition to mutation, sexual reproduction creates variation through recombination—shuffling existing variants into new combinations.

A crucial clarification: the useful statement in evolutionary theory is not that mutations are “random” in some metaphysical sense, but that they are not produced because they would be useful. In other words, variation arises without foresight; selection is the non-random sieve that preserves what works.

When small genetic changes happen to produce a trait that improves survival or reproduction in a particular environment—say, a slightly different beak shape that accesses food more efficiently—those variants can become more common. Over many generations, accumulation of such changes can produce substantial transformations, including changes in complex organs and behaviors.

Darwin’s finches in the Galápagos remain a famous entry point into this idea: different ecological niches favor different beak shapes. The underlying point generalizes: ecological pressures shape populations.

One reason evolution feels counterintuitive is that large organisms reproduce slowly relative to a human life. Big evolutionary changes can take thousands or millions of years, just as major geological or astronomical processes do. We don’t “watch” a canyon form or a star evolve within a single afternoon, but the evidence for those processes is still decisive. Evolution is similar: you infer long processes from converging lines of evidence.

Fossils are one line of evidence—imperfect, but immensely powerful. Fossilization is rare and biased toward certain environments and tissues, so the record will always be incomplete. Still, the overall pattern—order in time, transitions, branching diversification—fits evolutionary predictions. Many intermediate forms for major transitions are known, and “gaps” often shrink as new discoveries accumulate.

And importantly: many “intermediate” forms are living, not fossil—species that preserve traits that help us understand evolutionary branching, even though they are not literally our ancestors.

A cliché objection goes like this: “Evolution says humans descended from chimpanzees.” That is not what evolutionary biology claims. The evolutionary picture is a branching family tree. Humans and chimpanzees are not ancestor and descendant; they are evolutionary cousins who share a common ancestor in the deep past.

The same logic generalizes outward. All living creatures on Earth are related in a vast genealogical sense: a branching history of common ancestry over deep time. For many people, this is not depressing at all—it is a source of awe, a kind of cosmic kinship.

One of the most beautiful things about modern evolutionary science is that it does not rely on any single line of evidence. Comparative anatomy, fossils, biogeography, embryology, and genetics all converge on the same branching structure.

At the genetic level, you can compare DNA or protein sequences across species. The patterns of similarity and difference allow reconstruction of phylogenetic trees that match what we see in fossils and anatomy. You can also use calibrated rates of genetic change (with many caveats and error bars) to estimate when lineages diverged. The key point for this essay is not the exact date of every branching event—it’s the scale: the story is unimaginably older than the few-thousand-year timeframes implied by literalist readings of sacred texts.
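The “calibrated rates” idea can be made concrete with a back-of-envelope calculation. The divergence fraction and substitution rate below are invented for illustration (real analyses involve saturation corrections, rate variation, and wide error bars), but the scale of the answer is the point:

```python
# Back-of-envelope molecular clock (illustrative numbers, not real data).
# If two lineages each accumulate substitutions independently at rate r
# per site per year, then t years after their split they differ at
# roughly 2 * r * t of their sites. Solving for t:

def divergence_time(fraction_sites_differing: float,
                    subs_per_site_per_year: float) -> float:
    """Estimate years since two lineages split (ignores saturation etc.)."""
    return fraction_sites_differing / (2 * subs_per_site_per_year)

# Hypothetical example: sequences differing at 1.2% of sites, with an
# assumed rate of 1e-9 substitutions per site per year:
years = divergence_time(0.012, 1e-9)
print(f"{years:.1e} years")  # 6.0e+06 years: millions, not thousands
```

Even with generous error bars on both inputs, the estimate lands in the millions of years, which is the scale that matters for the argument.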

Over the short term, evolution often looks like shifting trait frequencies within a species. Over the long term, divergence can accumulate until populations become reproductively isolated—meaning they can no longer interbreed successfully under natural conditions. That is speciation.

Speciation isn’t always a sharp on/off switch. In nature it often behaves like a continuum, with partial compatibility, hybrid zones, and gradual divergence. A famous teaching example is the Ensatina salamander complex, often discussed as a “ring species”: neighboring populations can interbreed around a geographic region, while populations at the far ends of the chain have diverged enough that interbreeding breaks down. This is one of those cases that makes the underlying idea vivid: species boundaries can be the end-point of gradual divergence, not a magical dividing line.

One thing I want to state explicitly (because people often get confused here): even if a behavior or trait has evolved, that does not mean it is morally “right,” or that we should accept it. Evolution describes how traits spread; it does not tell us what we ought to value.

Evolution also does not produce “perfect design.” It modifies what already exists. That is why we see compromises and oddities in anatomy—structures that work well enough, but are constrained by history. Humans are full of such examples. The moral point for me is simple: if we want to become more humane, we have to build culture—rules, norms, education, and institutions—that restrain some evolved tendencies and cultivate better ones.

A parallel kind of evolutionary logic shows up in culture. Dawkins coined the term “meme” to describe cultural units that replicate—ideas, phrases, rituals, styles—that spread, vary, and undergo selection in minds and communities.

Language evolution is a good example. Languages branch and drift. Over time, groups can become mutually unintelligible. You can even reconstruct “family trees” of languages and infer common ancestors like Proto‑Indo‑European, spoken roughly 6,000 years ago, which eventually diverged into tongues as distinct as Hindi and Norwegian. Even among close relatives, the drift is measurable: English and German began to diverge significantly around the 5th century CE with the Anglo-Saxon migration to Britain, while Dutch and High German separated later, roughly between the 6th and 8th centuries, driven by the High German consonant shift. These are not static categories but snapshots of a flowing river; what was once a dialect becomes a distinct language—analogous to a biological species—given enough time and separation.

Ironically, religions also behave this way. Doctrines split. Schisms occur. New denominations form. We see this clearly in Christianity: the Great Schism of 1054 formally severed the Eastern Orthodox and Roman Catholic churches; later, the Reformation of 1517 splintered Western Christianity into Catholic and Protestant branches. That Protestant branch fractured further into Lutherans, Calvinists, and Anabaptists, and continued splitting into modern groups like Methodists and Pentecostals. This fragmentation continues today with astonishing speed. In just the last few years, the United Methodist Church has undergone a massive rupture over social doctrine, while evangelical congregations frequently splinter over narrower debates—ranging from the role of women in leadership to specific interpretations of theology—creating new networks almost overnight. At times, this drift is total. Newly formed divergent groups can become so distinct as to have no further contact or agreement with each other, a process analogous to the formation of distinct species in the biological world. This pattern is universal: Islam experienced its primary rift between Sunni and Shia almost immediately after the death of the Prophet Muhammad in 632 CE, and Buddhism diverged into Theravada and Mahayana schools roughly around the 1st century BCE. The “family tree” metaphor is not perfect, but it is illuminating: religions do not descend from heaven fully formed; they evolve within history.

And this matters psychologically: people sometimes treat their local, historically contingent version of a faith as if it were timeless and universal—when in reality it bears the fingerprints of cultural inheritance, conflict, politics, and geography.

Another important evolutionary idea is sexual selection: traits can spread not because they help survival directly, but because they affect mating success. The peacock’s tail is a classic case—beautiful, costly, and dangerous, yet selected for because it becomes desirable within the mating preferences of the species.

There is a long debate about what sexual ornaments “signal.” Some theories emphasize indirect signals of health and robustness. Richard Prum argues, persuasively in my view, that sexual selection can also reflect an evolving aesthetic culture—preference itself becoming a driver of biological change.

This is relevant to humans not because we are peacocks, but because it reminds us that biology is not only grim survival calculus. It includes courtship, display, aesthetic preference, and the strange feedback loops between bodies, brains, and culture.

Religiosity itself—along with the tendency toward mystical or paranormal belief—is not only cultural. Twin studies suggest that as people reach adulthood, genetic differences contribute meaningfully to individual differences in religious values and practices, while shared family environment tends to matter less than it does in childhood. Genetics accounts for only a portion of the variability, alongside development, culture, peer groups, and life events.

It’s also worth noting that intelligence and religiosity show a small-to-moderate negative association on average in some large literatures, particularly for more literalist or fundamentalist belief styles. This is not a claim about any individual believer (many brilliant people are religious), but about population-level tendencies and the cognitive styles that different religious environments reward or discourage.

A related trait dimension that matters here is schizotypy: a spectrum of unusual perceptual experiences, magical ideation, and pattern sensitivity. At mild levels it can correlate with creativity and a “poetic” mode of experience; at extremes it can shade into pathology. A mind more prone to vivid pattern-finding and unusual salience can be more vulnerable to interpreting internal experiences as external revelations.

And moral psychology matters too. Some people are more temperamentally drawn to moral foundations like loyalty, authority, and purity—dimensions that many religious communities strongly emphasize. Others prioritize harm reduction and fairness. These differences shape who feels “at home” in particular religious cultures, and who experiences them as suffocating.

I’m aware that this chapter is long. But I think the payoff is worth it: once you really absorb the logic and the evidence for evolution, it becomes hard to view literal creation myths the same way again. And yet—this is crucial—it does not require cynicism. For many people, it opens a door to a deeper, cleaner awe: a reverence for reality as it actually is, not as we once wished it to be.