
Saturday, February 28, 2026

The Psychology of Religion, Chapter 29: Conclusion

In conclusion, religious beliefs—and organized group religion in particular—have been a part of human civilization for thousands of years. Culturally, religion can have many benefits: it can help communities come together to celebrate and to grieve, to contemplate morality, to show gratitude, and to meditate. Religious faith is consolidated by human tendencies to be loyal (to one’s ingroup, to one’s family, to longstanding beliefs learned and practiced since childhood, and to idealized figures), and by the human tendency to internalize (for many believers, God functions as an internalized representation of perfect goodness, power, or protection). Religions are further consolidated by many enjoyable and meaningful cultural activities: much of the world’s greatest art, music, literature, and architecture is rooted in religion. Religions also help many people cope with the deepest, most painful, and most frightening experiences of life, such as facing the deaths of loved ones, or facing one’s own mortality. And religious services can be a medium through which people meet friends or potential partners, sometimes with a better-than-average chance of meeting someone who shares their values, and with that person vetted by the church community; the congregation itself can act as a village matchmaker.

Yet religions and other spiritual or mystical systems hold beliefs that are not true. These beliefs are often taken literally, and dogmatic adherence to them—public profession of them, loyalty to them—is frequently required as a sign of belonging. Some of these fictions may be inconsequential much of the time; many people can live decent lives without a precise understanding of biology, astronomy, geology, genetics, or ancient history. But the darker side has to do with the extremity of group loyalty: ingroups and outgroups form, and religion becomes an emblem of identity that can seed mistrust, exclusion, and maltreatment of outsiders. Dogmatic pronouncements can also become oppressive to the group’s own members, particularly when people are pressured into literalistic interpretations of sacred texts, or when “faith” becomes a moral duty rather than an honest way of grappling with uncertainty.  Furthermore, spiritual or mystical beliefs about causality can lead to dangerously poor judgment about important life decisions, yet the spiritually guided person can feel supremely confident while making these decisions.  

The lack of accurate education about the way the world works is ultimately detrimental to any individual, group, or nation. It is like a pilot flying an airplane without understanding how the engines work, assuming that planes fly by magic. Most of the time this may not seem to make much difference to the safety and navigation of the plane—until the weather changes, until something unexpected happens, until you need a sober understanding of what is real in order to respond well. A culture can coast for a long time on comforting stories. It is when conditions become difficult that false models show their true cost.

I think it is valuable to live lives in which we strive toward understanding deep truths—about ourselves and about the world—and it is simply not satisfying to settle for fictional beliefs, even if those fictions might comfort us. It is particularly troubling to me for children to be indoctrinated with dogmatic beliefs, especially if they are not exposed to accurate information about the world in terms of science, history, and culture. And it is troubling that there should be public financial support for religious groups, in the form of tax breaks and other privileges, unless these are clearly restricted to the charitable components of religious outreach rather than the promotion of dogma or political influence.

We certainly know that holding religious belief is not necessary to be a moral, kind, loving, gentle, humble person. In fact, in some cases religious beliefs can obstruct these positive qualities and add to the world’s problems. And it is possible to face the most difficult aspects of human life—grief, loss, pain, and death—while behaving honourably, peacefully, and nobly, without requiring belief in some eternal reward. Indeed, moral behaviour done for its intrinsic good, rather than motivated by fear of punishment or hunger for reward, seems to me a deeper ethical foundation. Such a stance does not require religion, but it does require effort: working on living well, striving to become a better person, and trying to be a stabilizing and humane influence on others.

In discussing religion, it is important to empathize with people who hold religious or spiritual beliefs. Respectful understanding of how and why people believe as they do matters—especially if the goal is genuine dialogue rather than tribal combat. It is also valuable to search for common ground, particularly with regard to values. Most religious people value integrity, loyalty, altruism, compassion, truthfulness, lawful behaviour, fairness, family, care of children, hard work, and the willingness to stand up for what is right even at risk to oneself. In a discussion about religious belief, it can help to emphasize these shared values, because it appeals to unity rather than escalating the feeling that one is an outgroup member disrespecting a sacred tradition.


And that brings me back to the question that started many of these reflections: how to face transience without leaning on supernatural reassurance. I sometimes think of simple things: a firework, a meal, a fire, a cup of hot tea. These things are transient; they disappear. Yet their constituents are still present—they have merely dissolved and dispersed into the surrounding space in a different form. The structure ends; the ingredients remain, rearranged. We can’t expect the cup of tea to survive unchanged forever, and we can’t expect the firework to glitter permanently. In fact, it is normal—and even required for its enjoyment—that it be transient. A lot of religion tries to deny this, or to soften it with a story about eternity. I think there is another path: to accept that things end, to grieve honestly when they end, and still to love them fiercely while they are here.

There are examples of keeping the healthiest aspects of religion—the focus on values, morality, kindness, altruism, charity, humility, meditative self-care, self-improvement and sincere amends-making, caring for and accepting care from community members, enjoying beautiful music, art, and architecture, and cultivating gratitude and reverence—while not becoming captive to narrow dogma, false beliefs about science, or denigration of outsiders. Some interfaith movements aim to cultivate peace and mutual respect across traditions. Some branches of modern religion are simply less dogmatic and more open to science and cultural pluralism. And many people, religious or not, find ways to live with moral seriousness and spiritual depth without insisting that myths must be treated as literal facts.


The Psychology of Religion, Chapter 28: Religion and Common Knowledge

One of the most useful ways to understand religion is to treat it as a way humans achieve social coordination. Steven Pinker’s recent work on common knowledge offers a sharp insight about this. “Common knowledge” is not merely that many people know something; it is the further (and crucial) fact that everyone knows that everyone knows it, and everyone knows that everyone knows that everyone knows it—an infinite regress that human beings handle with surprising ease in daily life.
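The regress is easier to see when spelled out mechanically. Here is a minimal sketch (the function name and example fact are my own illustrative inventions, not anything from Pinker’s formalism) that generates the statement at each level; common knowledge is the limit in which every level holds at once:

```python
# Toy sketch of the "everyone knows that everyone knows..." regress.
# The function name and example fact are illustrative assumptions.

def mutual_knowledge(fact, level):
    """Spell out the level-th statement of the regress.

    Level 0 is the bare fact; "common knowledge" is the (infinite)
    conjunction of the statements at every level.
    """
    statement = fact
    for _ in range(level):
        statement = f"everyone knows that {statement}"
    return statement

print(mutual_knowledge("the meeting is at noon", 2))
# -> everyone knows that everyone knows that the meeting is at noon
```

The standard point from epistemic logic is that no finite level of this listing is enough for fully reliable coordination; what people rely on in practice is the whole tower at once, usually established in a single shared public moment.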

This strange cognitive capacity is not just a philosophical speculation. It is a practical tool that makes civilization possible. In Pinker’s framing, common knowledge generates coordination: it lets people converge on shared conventions (driving on the right, accepting paper currency, showing up to the same meeting place at the same time) without needing a central enforcer to micromanage every choice. But the same logic also explains the “shadow side” of social life: why people avoid saying obvious things out loud, why hypocrisy can be stabilizing, why shaming mobs ignite, why revolutions seem to erupt “out of nowhere,” and why public rituals—of all kinds—have such force.

Religion is, among other things, a machine for manufacturing common knowledge. Private belief is psychologically real, but socially weak. It does not coordinate strangers. A society cannot run on invisible beliefs that no one can observe. What rituals do—prayer spoken aloud, communal singing, congregational responses, public confessions, initiation rites, sacred calendars, distinctive clothing, a crucifix necklace, shared dietary rules—is turn inner states into public signals. They convert “I believe” into “we all see that we believe,” and then into “we all know that we all see that we believe.”

This matters because people are extremely sensitive to the social risk of being the odd one out. If belonging is the reward and ostracism the punishment, the most dangerous condition is uncertainty: Do they believe this? Do they know I believe it? Do they know I’m wavering? Rituals collapse that uncertainty. They make allegiance visible. They create an emotionally saturated version of a contract—less like signing a document, more like standing under a spotlight and letting the group watch you sign with your whole body.

This is why religions place such emphasis on public acts. Private prayer is meaningful to many, but communal prayer is socially decisive. Singing alone is esthetic; singing together is social glue. An individual moral intuition is fragile; a moral intuition recited in unison becomes harder to question, because questioning it is no longer a solitary cognitive act—it is a social offense. And once a belief is entangled with common knowledge, its truthfulness often becomes secondary to its coordination-value. The belief may be fictional, but it is socially efficient.

Common knowledge is not only about shared content; it’s also about credible commitment. A cheap signal is easy to fake. A costly signal is harder to fake, which is why social groups are so attracted to cost. High-demand religions—those that require many hours of weekly participation, tithing, conspicuous behavioural restrictions, sexual policing, or public displays of devotion—often look irrational or excessive from the outside. But through the common-knowledge lens, some of this “irrationality” is exactly the point. If a group can get you to do something that is inconvenient, stigmatizing, or effortful, it has a way to distinguish true loyalists from casual tourists. The sacrifice itself becomes evidence.

This is also why initiation rituals recur across wildly different human groups, religious and otherwise. The ordeal is a signalling device: “I paid a price to be here; therefore I must value being here; therefore I am one of you.” The group sees the price, and the price creates common knowledge of commitment.

Pinker makes an additional point that is deeply relevant to religious life: people do not always want knowledge to become common knowledge. They often go to great lengths to ensure that even if everyone privately knows something, no one is forced to publicly acknowledge it. Many communities function because people collude, tacitly, in not pressing certain questions to the point of explicitness. The moment a doubt is spoken plainly, it stops being a private flicker and becomes a social event. It demands response. It forces alignment. It threatens the shared story.  

I can't help but think that, in parts of the world today, many members of large partisan groups harbour private doubts about very alarming world events; but few people within these communities have the willingness—or courage—to speak their doubts out loud, since they would risk losing the support of their community. I think in particular of a well-known world leader who is often in the news. It reminds me of the fairy tale The Emperor's New Clothes—a story especially apt in our modern times, since many of the truths of the current world situation, and of the behavioural problems of major leaders, are so obvious that a very young child could understand them clearly. Perhaps the innocence and humility of a child's voice is exactly what is needed to convince those currently unwilling to speak the truth. Furthermore, one definition of heroism, in my opinion, is the willingness to speak one's private knowledge of the truth to a group that might, at least initially, reject you for it.

In practice, a religious community often survives not by answering every question, but by managing which questions are acceptable to ask.

Religious spectacles—miracles, exorcisms, dramatic conversions, speaking in tongues, revival meetings—are not just theological events. They are high-powered signalling events. They take a private feeling (“I felt something”) and turn it into a public fact (“We all saw her fall, shake, cry, speak strangely, rise transformed”). The group witnesses a performance that is emotionally contagious, and the witnessing itself becomes part of the evidence.

The crucial move is not that an unusual event occurs, but that everyone sees everyone seeing it. This is how common knowledge is made at high speed: a shared spectacle that forces a shared interpretation, or at least a shared posture. If you stand in the room and do not respond, you are not merely unconvinced—you are socially deviant. The power of the event is partly the power of mutual surveillance.

This is also why sceptical outsiders often have a dual reaction to certain public charismatic performances: amusement at the apparent absurdity, mixed with unease at the real influence such spectacles can have when they become fused to political power. The performance may look ridiculous, but its social function is very serious: it converts theatrical intensity into tribal certainty.

A frequent defence of religion is that it provides moral structure. That claim is not wholly wrong—at least at the level of group coordination. A community that repeats moral language weekly, that teaches children shared scripts for gratitude, restraint, charity, and self-scrutiny, will often produce decently socialized people. The group is continuously manufacturing common knowledge about what counts as admirable, shameful, or forbidden.

But common knowledge cuts both ways. It can coordinate kindness; it can also coordinate cruelty. When a group makes contempt for outsiders (such as immigrants) common knowledge—through sermons, jokes, or political messaging—the moral atmosphere shifts. People become emboldened. What was privately felt becomes publicly permitted. The difference between a prejudice that quietly lingers in someone’s mind and a prejudice that is openly shared is enormous: the second is actionable. It becomes policy. It becomes bullying. It becomes violence with a clean conscience.

We see today a rise in bullying and prejudice that stems partly from this common-knowledge effect: various groups coordinate a social norm in which prejudicial thinking is openly shared within a community and becomes an emblem of partisan involvement.

The frightening historical efficiency of religious persecution is, in part, a story about common knowledge: it is easier to harm others when the group has made the justification publicly shared, ritually repeated, and socially rewarded.

Common knowledge is not only created in sanctuaries; it spreads through networks. Here the work of Nicholas Christakis is a useful complement. His research argues that behaviours can “cascade” through social networks—spreading from person to person to person, sometimes out to several degrees of separation. Human behaviour is not merely individual choice; it is often contagious.

Religion has always understood this intuitively. Congregations are network structures: friendship graphs with rituals attached. Conversion is rarely solitary; it is more often a relational event. People move toward belief because a trusted person pulls them toward a group in which belief is common knowledge. Doubt spreads similarly: not primarily through reading an argument, but through watching someone you respect begin to question the sacred story. The moment that becomes visible, it becomes socially thinkable. It becomes “sayable.” It becomes a potential cascade.

This is one reason religious authorities, across centuries, have been so preoccupied with public dissent. Private doubt is manageable; public doubt threatens contagion.

The “New Atheist” era often tried to treat religion as if it were primarily a set of factual claims—claims that could be refuted, one by one, by geology, evolutionary biology, textual criticism, or cosmology. Those refutations matter. But they often fail to persuade for the same reason a spreadsheet rarely defeats a love affair: the object is not merely an idea; it is a social world.

If religion is partly a technology for manufacturing common knowledge—about belonging, virtue, status, and identity—then a purely evidential critique will bounce off the surface for many people. The deeper structure is social. To leave a religion is not only to change one’s beliefs; it is to risk becoming unintelligible to one’s own tribe. In the harshest cases, it is to risk exile. The mind treats that as a danger.  

This also helps explain why political leaders so often perform religiosity even when their lives show little evidence of it. Performance creates common knowledge. A staged photo with a sacred symbol is not primarily addressed to God; it is addressed to the crowd. It signals, “I am one of us,” and it invites the crowd to become complicit in acting as if that were obviously true. Once the performance becomes common knowledge, dissenters inside the coalition pay a social price for pointing out the obvious.

Therefore, the secular task is not only to critique supernatural claims. It is to build non-supernatural forms of common knowledge that can do some of the same social work. Something like this is already happening in the modern world. People gather around causes, institutions, professions, civic rituals, scientific identities, mutual aid networks, even exercise cultures. These can be silly or beautiful; freeing or authoritarian. The point is not that secular life lacks ritual. It is that secular rituals are often fragmented, unstable, less deeply rooted in family history and ethnic culture, and less explicitly oriented toward moral formation. The art and esthetics of the secular world are also far less well developed than those of the religious world.

Religion persists not only because people are credulous or fearful, but because religion solves hard social problems. Pinker’s concept of common knowledge helps explain how it solves them—sometimes in ways that elevate human life, sometimes in ways that deform it. And once one sees religion as a social technology of visibility—of signals, rituals, and shared scripts—one can critique it more honestly: not as a childish mistake, but as an ingenious human invention that exacts a price.

The deeper question is whether we can build a life, and a society, in which the best human goods that religion has traditionally coordinated—community, moral aspiration, awe, mutual care—can become common knowledge without requiring that we pretend, together, that comforting fictions are facts.

The Psychology of Religion, Chapter 27: Consciousness

There are many unanswered questions about how the universe works. Part of the wonder of science is appreciating that for every advance in understanding, there are always new horizons of the unknown to explore further.

I find that one existential frontier in understanding has to do with consciousness. Whatever the physical explanations for why we have conscious, subjective experience (of memory, drives, sensations, emotions, etc.), it remains truly miraculous that this occurs. It is true that consciousness exists on a continuum; it has definitely been sculpted by evolutionary forces, and is subject to a lot of variation, with diminished or gradually altered consciousness caused by sleep, fatigue, anesthesia, substances, neurological disease, etc. It is interesting to consider whether consciousness could be a property of nature itself, as opposed to a property only of a neurological system such as the brain. Some great scientists, such as Roger Penrose, have theorized about the mechanisms of consciousness; while I think such theorizing is interesting and worth following, I'm not sure the result would change my opinion on this matter much. Even if there were a precise physical explanation, it would not lessen the miraculousness of it.

I find consciousness even more miraculous than "free will," since even if the universe were entirely deterministic or superdeterministic, there would still be human consciousness—something that deserves a feeling of wonder and awe. Some people would say that the phenomenon of consciousness is a manifestation of the divine—and I guess I'd have to be okay with that, perhaps even as a foundational definition of the word "divine."

The Psychology of Religion, Chapter 26: religiosity, narcissism, and obsessiveness

The combination of religiosity with narcissistic traits is not rare, and it can lead people to insinuate—or directly assert—that their beliefs, their culture, and their moral grounding are simply better than those of people outside their faith. Boastfulness, arrogance, and self‑righteousness then function as a way of belittling others. Traits like this are sometimes rewarded inside a shared belief system, especially when confidence is mistaken for virtue. Once again, this violates religion at its best, which (in many traditions) emphasizes humility, kindness, and respect for outsiders.

Sanctimony is a related phenomenon: moral language used not primarily to understand right and wrong, but to signal superiority, to enforce conformity, or to punish dissent. In its mildest form it is simply performative piety; in its harsher forms it becomes a social weapon—one that can make ordinary people feel small or wrong.

Obsessiveness, as a personality style, refers to rigidity and narrowness, with intolerance of shades of gray, and a tendency to judge others harshly for deviations—large or small. (This is closer to obsessive‑compulsive personality traits than to OCD; it’s about rules and control more than about unwanted intrusive thoughts.) When this "Pharisaical" style fuses with religion, it can create families and communities where people live in a chronic state of being watched, measured, and morally scrutinized. The atmosphere becomes tense, cautious, and punitive—more about avoiding wrongness than cultivating goodness. Again, this runs against religion at its best, which repeatedly elevates higher values—love, mercy, generosity, humility—above rule‑keeping for its own sake.

To be clear, these traits are not “religious” traits; they are human traits. But when combined with religion they can often grow, or masquerade as piety.

The Psychology of Religion, Chapter 25: Speaking in Tongues

Some religions feature unusual behaviours that are accepted as manifestations of divinity. One example is glossolalia (“speaking in tongues”). Every cultural group has rituals that symbolize transcendence or divine intervention in some form, but it is concerning in modern times that people would treat this as a literal case of God “speaking through” someone, rather than as a human psychological and social phenomenon.

So what do we actually know about glossolalia? It usually isn’t the dramatic idea some imagine—suddenly speaking a real foreign language you never learned. Instead, it’s speech-like vocalizing: it has rhythm, emotion, and a kind of “word-like” flow, but it doesn’t reliably carry stable meaning or grammar the way a normal language does. When linguists study recordings, they tend to find that it draws heavily on the sounds and speech habits the person already has in their ordinary language—almost like a voice improvisation that feels like language, without functioning as one in the usual sense. When glossolalia happens in a context where it is expected, taught, and socially supported, it looks like a learned trance or skill—comparable to hypnosis, flow, or dissociation.

One can find examples online—there are widely circulated clips of a high-profile “faith leader,” close to a major political figure, performing “tongues” in public. I think a lot of people seeing this for the first time have a mixed reaction: perhaps, with a nervous smile, the thought "how can anyone take this seriously?" followed by some discomfort, and then a sharper concern once it lands that the performer has a large following of fervent supporters, and has mainstream political influence. It is deeply ironic that a communicative tool which does not carry any semantic meaning can be so persuasive to otherwise logical observers.

From a psychiatric point of view, glossolalia can be understood as a particular kind of altered attention state that can be learned, practiced, and performed. Put someone into the right mix of conditions—music, group emotion, high expectation, authority cues, shared language about the sacred—and a person can produce vocalizations that feel deeply meaningful. The speaker may experience it as surrendering control; the group experiences it as proof that something “beyond” is present.

This is where the social function matters most. Like “miracles,” and like behavioural restrictions that visibly mark membership, glossolalia can work as a signal: it makes the group feel special, chosen, and close to the divine in a way outsiders “don’t get.” That feeling is intensely bonding. It strengthens loyalty, rewards conformity, and makes doubt feel not merely intellectual but socially dangerous—almost like betrayal. The experience itself becomes the evidence, and the shared intensity becomes the glue.

Of course, the same machinery can be used for darker purposes. A leader who is skilled at spectacle and emotional orchestration can use these displays as persuasion technology: not by offering reasons, but by creating awe, certainty, and a sense of “we are witnessing the sacred.” The danger is not the oddness of the behaviour; it’s the way the resulting belief and allegiance can be redirected into real-world authority—sometimes including political authority, or as a tool to obtain financial donations—under a banner of divine mandate.

Friday, February 27, 2026

The Psychology of Religion, Chapter 24: Behavioural Restrictions

In some cases, religious groups prescribe particular foods, particular styles of dress, and particular behavioural expectations that are only loosely related to a moral issue—if they are related at all. Sometimes these practices can be understood as ordinary cultural variations with obscure origins. But often there is a sense that the rules are rigid and imperative, such that veering away from them is treated as an offence—either against the religious community or family, or against God. At times these restrictions make it difficult to live freely or comfortably in modern society.

One major function of these rules, in practice, is their signalling value: they remind others (and even oneself) of group affiliation and loyalty. This is comparable to other mechanisms groups use to bolster cohesion. When there are visible styles of appearance and behaviour that clearly mark membership, it becomes easier to find fellow members—and easier to be suspicious of outsiders. Over time, people can become fond of these behavioural symbols. They can evoke powerful feelings associated with the religion, and can function like wearing a ring with special significance every day and night for years, beginning in childhood. People may then feel uneasy or even guilty without it, and feel relief when they encounter others wearing the same symbol.

But if the “ring,” so to speak, becomes massive and cumbersome—if it begins to hinder ordinary life—then what once felt meaningful can become a kind of burden. (It starts to resemble the peacock’s tail: a costly display that signals loyalty, but at a real practical price.)

We see similar dynamics in modern culture in many settings—uniforms, subcultures, and corporate branding. Often these are harmless variations. The darker side appears when people do not wish to participate, when the rules become tools of control, or when symbols are used to suppress ordinary human behaviour—and when the person faces rejection or punishment from peers for noncompliance.

A related dark side of religious dogma is doctrine-based condemnation or discrimination against people whose lifestyles are not endorsed by the group. Often, at root, this is an ordinary human tendency—present in many non-religious settings as well—to exclude or denigrate people who are different, even when they are not harming anyone. But the best of religious texts call people to rise above this: to be inclusive, non-judgmental, and unfailingly loving toward everyone, not only toward those who share the same beliefs or lifestyle. There are various Biblical stories, for example, of reaching out in a loving, accepting way to members of groups that were widely vilified in their own time.

The Psychology of Religion, Chapter 23: Eschatology

Many religions have a view of the “end times”—what happens after death, and, in some traditions, how history itself will end. This is called eschatology. In some communities there is an almost excited anticipation of the world’s ending, paired with the idea of a glorious ascent of the worthy up to heaven (there’s that spatial metaphor again, taken quite literally by many, as though heaven must be “upwards”). Of course, those with this view usually assume they will be among the worthy. In turn, some people cultivate a kind of passive resignation about trying to improve the world’s problems: they say these are the “end times,” so why bother. And to some degree this kind of thinking can shape how people relate to society and politics—sometimes pulling them away from the work of changing the world.

I realize, of course, that eschatology doesn’t always produce passivity; in some forms it can motivate people toward reform or activism. But when apocalyptic belief becomes an excuse for disengagement—or an indulgence in catastrophe—it becomes a bleak and cynical example of what happens when dogma is taken literally. At its darkest, it can spill into extreme behaviour, such as the Heaven’s Gate mass suicide in 1997. Even if the world were ending, it seems profoundly dishonourable to adopt passive resignation—let alone a smile of anticipation—about helpful action. It would be like watching a burning building with no attempt to help the people trapped inside, quietly nodding to yourself that heaven is getting closer.

I think most of us would agree that the most noble and beautiful actions humans are capable of are helpful and altruistic: working to improve a situation even when it is bleak or seemingly hopeless. A truly noble person would not be motivated by thoughts of a glorious heavenly reward upon death; they would be motivated to do good because of the intrinsic goodness of the action itself.

The Psychology of Religion, Chapter 22: Heaven and Hell

Many religions have concepts of Heaven and Hell: Heaven an eternal state of perfect happiness, and Hell an eternal state of punishment. Religious doctrines often advise that people live appropriately during their lifetime on earth, and after they die they will be judged and sent to one place or the other. In some doctrines, the criteria are not even that you live a good life (for example, to be kind, to not hurt others, to contribute to society, to make the world a better place, etc.) but rather whether you profess belief in a very particular way. Thus, one could be the kindest, most helpful person in human history, but still go to hell if the appropriate beliefs are not endorsed. Or one could commit the worst atrocities in history, and just be an all‑round hurtful person, yet go to heaven afterwards if the appropriate beliefs are endorsed.

This concept functions as a powerful engine of group affiliation using a combination of threat and reward. It is like a company offering permanent safety and support if you sign a lifetime membership, agree to promote the brand, and guarantee not to deal with competing companies. But the same company would also threaten to ruin you permanently if you broke the deal. There would be frightening rules in the contract, such that the act of challenging company policy would be branded with words like “heresy” or “apostasy,” discouraging anyone from questioning the status quo.

Such a system is in contradiction to the spirit of fairness, grace, and justice—the striving toward mature morality—present in religious doctrines at their best. An infinite punishment for a finite set of crimes does not make sense. And the idea of punishing someone not for a crime, but for having an idea, belief, or thought that does not conform to a prescribed norm, is contrary to most people’s concept of a healthy society, and contrary to the “bill of rights” ideals that many of us—religious or not—value highly.


Pascal's Wager

A classic argument used to prop up religious belief is Pascal’s Wager. The reasoning goes something like this: if you believe, and the religion is true, you gain Heaven and avoid Hell; if you do not believe, and the religion is true, you face infinite punishment; if the religion is false, there is little or no cost either way. Therefore belief is said to be the safest bet.
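The wager’s structure can be made explicit with a toy expected-value calculation (the probability and payoffs here are illustrative stand-ins, not claims about real stakes):

```python
# Toy expected-value sketch of Pascal's Wager (illustrative assumptions only).
# Payoffs are arbitrary stand-ins: infinite reward/punishment, small finite cost.

INF = float("inf")

def expected_value(p_true, payoff_if_true, payoff_if_false):
    """Expected payoff, given a probability that the religion's claims are true."""
    return p_true * payoff_if_true + (1 - p_true) * payoff_if_false

p = 0.001  # any nonzero probability, however small
ev_believe = expected_value(p, INF, -1)      # infinite reward vs. minor finite cost
ev_disbelieve = expected_value(p, -INF, 0)   # infinite punishment vs. no cost

print(ev_believe)     # inf
print(ev_disbelieve)  # -inf
```

The arithmetic shows why the wager feels compelling: any nonzero probability multiplied by an infinite payoff swamps every finite cost. But that same dominance argument goes through unchanged for every belief system that promises its own infinite reward, which is where the reasoning collapses.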

But this reasoning is preposterously invalid. First of all, one could apply the same logic to any number of mutually incompatible religions, each with its own reward-and-punishment scheme. Which one, exactly, are you supposed to choose? Many religions explicitly require that you renounce the others. One could just as easily invent a magical rabbit in orbit around the moon who grants eternal reward, or a literal Santa Claus delivering salvation at Christmas, or Bertrand Russell’s celestial teapot drifting between Earth and Mars. The wager does not tell you which claim to believe. It merely exploits fear.

Second, it is a very poor moral foundation. It reduces belief to a selfish reward-or-punishment calculation: believe so that you can profit, or believe so that you can avoid pain. But this kind of motive is at odds with the lofty ethical language religions themselves like to use. If a deity valued sincerity, honesty, courage, and intellectual integrity, then strategic belief adopted out of self-interest would look shallow, selfish, and hypocritical.

Third, the claim that there is “no downside” to belief is obviously false. Much of the rest of this book is about that downside: the psychological distortions, tribal loyalties, guilt, fear, dogmatism, social coercion, and political consequences that can follow from false sacred beliefs. Belief is not cost-free. It can shape an entire life, a family, a culture, and a society.

So Pascal’s Wager is not a deep argument. It is a fear-based sales pitch dressed up as prudence.

----------

In the world, on average, roughly two people die every second—about 7,200 deaths per hour, and on the order of five million per month. Only a fraction of these people follow any one particular religious belief system. Therefore, if one holds a strict doctrine of Hell tied to a strict interpretation of “correct belief,” it would follow that thousands of people every hour—including many who lived gentle, kind, generous lives—would be banished into eternal punitive suffering because they did not endorse the right beliefs. Conversely, many who behaved cruelly all their lives could receive an infinite reward if they endorsed the correct beliefs at the last moment. Imagine an all-powerful divine creator, pushing about one person every second (many of them kindly elders who simply didn’t happen to endorse the appropriate beliefs) into a flaming inferno.
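The back-of-envelope arithmetic here is easy to verify; a quick sketch, assuming roughly 60 million deaths worldwide per year (an approximation close to recent global figures):

```python
# Back-of-envelope check of the death-rate figures in the text,
# assuming roughly 60 million deaths worldwide per year.

deaths_per_year = 60_000_000
seconds_per_year = 365 * 24 * 3600

per_second = deaths_per_year / seconds_per_year
per_hour = per_second * 3600
per_month = deaths_per_year / 12

print(f"{per_second:.1f} per second")   # 1.9 per second
print(f"{per_hour:,.0f} per hour")      # ~6,800 (the text's 7,200 assumes exactly 2/second)
print(f"{per_month:,.0f} per month")    # 5,000,000 per month
```

Under this assumption the figures in the text hold up as order-of-magnitude estimates: roughly two deaths per second, thousands per hour, millions per month.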

If one truly believes this is the fate of countless people, one would be forced into a grim psychological choice: either adopt indifference to unimaginable suffering, adopt a horrific view of how reality works, or devote one’s life to converting as many people as possible so as to save them from hell. It would not make sense to devote one’s life to rescuing people on a smaller scale (being a firefighter, a physician, a therapist, a humanitarian worker), since this would distract from the colossal task of saving people from an infinitely worse fate than any earthly accident, illness, or war could impose. Proselytizing would seem to be the only fully rational altruistic activity. And if you wanted to “save the most people efficiently,” you would focus your efforts on those with shorter life expectancy, since their impending eternal suffering would arrive sooner. If one’s own friend or child strayed from the perceived correct religious involvement, it would be understandable—within this belief system—to view this as the most horrifying contingency imaginable, infinitely more devastating than losing them to illness, assault, or accident, because the imagined suffering would be permanent.

This is one reason the Heaven-and-Hell framework is so morally destabilizing. It incentivizes fear, coercion, and tribal control, while undermining the best ethical themes that religions also sometimes teach: compassion, humility, grace, and love.

There is a sentiment, often attributed to Mother Teresa, that I find ethically beautiful: if Hell truly existed, the only morally coherent response would not be righteous triumph or celestial indifference, but a willingness to abandon the paradise of Heaven to comfort those suffering in the abyss. To enjoy eternal bliss while remaining fully aware that others are enduring eternal conscious torment requires a catastrophic suspension of empathy. The impulse to forsake one’s own salvation to sit with the damned represents a true transcendence of character. It highlights a profound theological irony: the highest conceivable expression of morality—unconditional, self-sacrificial compassion—demands a fundamental rejection of the traditional boundaries of divine justice. We should all strive toward such transcendence, prioritizing radical empathy over the selfish security of a gated paradise.


The Psychology of Religion, Chapter 21: Historical Atrocities

Humans have engaged in all manner of atrocities, and despite the horrors of the past century, we see repeatedly—across earlier centuries as well—how easily cruelty can be normalized, ritualized, and justified. The human capacity for harm is ancient. What is especially sobering, though, is how often major institutions—including major religions—can make cruelty feel righteous.


Many historical atrocities have occurred under the banner of religion, especially when religious identity fused with conquest, state power, or tribal domination. Charlemagne’s campaigns against the Saxons (772–804 CE), for example, fused military conquest with coerced Christianization; forced conversion was backed by severe legal penalties, and there were episodes of mass killing in the course of suppressing Saxon resistance, most notably the Massacre of Verden in 782, where 4,500 Saxon prisoners were reportedly executed in a single day. 

The Crusades (1095–1291) likewise included mass slaughter justified in explicitly religious terms: the Rhineland massacres of 1096 saw the destruction of Jewish communities in Speyer, Worms, and Mainz by crusader mobs, and the Siege of Jerusalem in 1099 ended with the indiscriminate mass killing of Muslims and Jews within the city walls.

The Thirty Years’ War (1618–1648), driven in significant part by religious divisions between Protestant and Catholic states in the Holy Roman Empire, became one of the most devastating catastrophes in European history. Ending with the Peace of Westphalia, the conflict resulted in deaths in the millions—killing up to a third of the population in some German territories, with many deaths due to famine and disease rather than battlefield combat—and left a legacy of psychological trauma and social ruin.

The Spanish Inquisition (established in 1478 and lasting until 1834) created a terrifying machinery of coercion and intimidation, with religious motives explicitly invoked; the exact numbers are debated by historians, but the core point is not: it was a system designed to enforce conformity (targeting Jewish conversos and later Protestants) through fear, punishment, and (in many cases) execution.

Colonial movements in more recent centuries often deployed religious language—“civilization,” “salvation,” missionary uplift—as moral cover for economic extraction and domination. The Congo Free State terror under Leopold II (1885–1908) is one of the most infamous examples of colonial exploitation and brutality, resulting in the deaths of millions through forced labor and systemic violence. 

The transatlantic slave trade and slavery (spanning roughly the 16th to the 19th centuries) were likewise justified by many religious leaders and institutions in their own time (often citing the biblical “Curse of Ham” as a theological rationale), even as other religious figures became central to abolitionist movements. The point is not that religion uniquely causes exploitation, but that it has repeatedly been recruited to sanctify it.

The same pattern appears in Canadian history. “Christianization” was one motive—alongside state assimilationist policy—behind the Residential School system (which operated federally from 1883 until the last school closed in 1996). In this system, more than 150,000 Indigenous children passed through church-run, state-funded institutions characterized by coercion, cultural destruction, and extensive abuse, with many children dying and records often incomplete. 

The Spanish conquest of the Americas (beginning in 1492 and intensifying with Cortés's campaign against the Aztecs in 1519) similarly involved catastrophic Indigenous death and cultural devastation. While infectious disease accounted for much of the mortality, religious institutions were at best entangled with the colonial project and at worst active participants in its dehumanization, often reading the Requerimiento—a demand for submission to the Pope and Crown—to uncomprehending Indigenous populations before launching attacks.

Of course, in human history, violence and atrocity have occurred without religion, and secular ideologies have also justified horrors. But it is very clear that religions have not been reliably protective against the worst destructive drives of humanity. Worse, religious certainty has often been deployed to justify abuse, discrimination, and war—to lend the aura of sacred duty to actions that would otherwise look like what they are: cruelty, domination, and theft.

The Psychology of Religion, Chapter 20: Religious Abuse

Abuse is unfortunately common. It affects every type of community and family. I have seen numerous cases in which religious texts or elements of religious faith were used as tools to abuse innocent children. (To protect privacy, identifying details have been altered, and some examples are composites.)

This includes one of the worst cases of emotional abuse I have seen in my career.

In this case, a teenager with a gentle, intelligent, altruistic personality—living in an affluent household—was subjected to forced “family sessions” late at night. She would be made to sit for hours in her bedroom while various family members recited Bible passages in a formal, prosecutorial tone, directed by a brutal, controlling father. The purpose was not moral guidance; it was humiliation and intimidation.

The teenager was, in fact, actively involved in altruistic leadership at a church. But the family accused her of hypocrisy and of being a “false disciple,” citing passages such as Matthew 7:21–23 and Matthew 23:13–20, and repeatedly telling her, “God has abandoned you,” alongside threats that she would go to hell. The profound irony, of course, is that the parents were weaponizing orthodox theology in order to exert brutal control, which is exactly the hypocritical, performative arrogance that these scripture passages warn against.

Then the family would pivot to the Old Testament, including Deuteronomy 21:18–21, which describes a “stubborn and rebellious child” being stoned to death by the community. Because she was religious herself, this experience was not merely frightening; it was torturous—permanently traumatizing—especially in combination with the family’s other abuse and neglect.

These episodes were interspersed with the family’s evangelical outreach efforts in the community, “to spread the word.” As is often the case, the parents were seen as pious and respectable by others. Of course, abusive behavior has complex causes, and in the absence of religion these parents might have weaponized something else. But in this family, the abuse worsened as religious involvement intensified. Congregants who were aware of what was happening were horrified, but they did little to intervene beyond offering prayer.

In another example, children of a very religious mother experienced profound daily neglect and emotional abuse for years. Once again, members of the religious community did little to change the situation other than pray. When one of these children later lived in a different environment with the other non-religious parent, her quality of life improved dramatically. She grew into an intelligent, kind, outstanding young woman—though she still carries post-traumatic symptoms from that earlier phase of life.

In another, a family had previously been happy and well-integrated with the extended family, but as they became more involved in extreme fundamentalist religion, their personalities seemed to change. They became dark, angry, and suspicious, eventually estranging themselves from the rest of the family. Threatening posters appeared on their property with scriptural warnings about hell. Attempts to reach out with kindness were met with scolding condemnations about religious differences. A particular low point was an angry, rambling religious rant delivered during the funeral service of a family elder. These changes tracked with the family becoming more insular and more committed to extreme beliefs and practices. To this day, I feel for the children who had to grow up in that environment.

I have seen numerous examples of estrangement: religious parents ostracizing, shaming, or shunning children over lifestyle or belief differences—sometimes with these actions encouraged and applauded by the religious community. In other cases, religious adults shunned their aging parents, depriving them of access to grandchildren, again with some pious explanation. As always, there are contributing factors beyond religiosity—personality traits, trauma histories, rigid family systems—but it is hard to deny that dogmatic belief, combined with community endorsement, can make these problems deeper and more entrenched.

One phrase I have heard from abusive religious parents is: “turn or burn.” I find this a concise epitome of a belief that often lurks in the background: if you don’t follow my belief, you deserve to be tortured forever. It is offered as an “invitation,” but it functions as a threat. It may even be well‑meant in some warped way, yet it violates the moral foundations the religion claims to represent. Surely, if a way of life is divinely inspired, it should be compelling because it is beautiful and ethically coherent—not because it terrifies people into compliance.

It can be clarifying to hear accounts from people who have escaped abusive religious communities. Megan Phelps‑Roper is one example. One of her most useful insights is not a clever argument against dogma, but a relational one: what helped her most was sustained contact with outsiders who treated her with compassion and respect—people who were willing to build a human connection before trying to debate her beliefs.

The Psychology of Religion, Chapter 19: Object Relations

Humans have a far more richly developed capacity for imagination than other animals. We can carry internalized representations of important relationships inside the mind. In a loose way, this resembles having an “imaginary friend,” but the point is not childish fantasy—it is a normal developmental achievement: the capacity to hold another person in mind when they are not physically present. This is one of the foundations of object relations theory, one of the more insightful and useful branches of psychoanalysis.

Developmentally, we are initially comforted by a literal parent. Over time, we can also carry in memory an internalized representation of the parent—something like an inner sense of their presence, values, and voice—which can be comforting and stabilizing even when we are alone. This helps us develop confidence and emotional continuity, and it helps us cope with separation and, eventually, grief if a loved one dies.

For many people, religious life includes an internalized relationship with an idealized figure they call God. In much Western Christian imagery (and often in people’s mental pictures), this figure is imagined in human form—often as a bearded man, sometimes portrayed as white—despite the Middle Eastern Biblical setting of the “Holy Land” and the diversity of human appearance worldwide. Many people experience this internal figure as gentle, kind, fatherly, all-knowing, loving, wise, consistent, coach-like, or even therapist-like. Others internalize a divine figure who feels stern or frightening, poised to punish wrongdoing. Often these images reflect what people have learned to associate with authority, safety, and love in their own families and communities—whether authority is experienced as warm and reassuring, or strict and punitive.

Just like relationships with living humans, people can become fiercely loyal to these internal relationship figures—sometimes to extremes, including willingness to suffer or die in service of what they experience as sacred. And because this relationship is experienced as profoundly real, it is unsurprising that many believers feel anger or grief when someone frames it as “imaginary,” or as an internal construct rather than an external reality.

Many traditions also include a personified concept of ultimate evil—often described in devil-like terms. Psychologically, this can make moral struggle more vivid and narratively coherent: it reframes temptation, cruelty, or regretful behavior as a battle against an external force rather than as a confrontation with one’s own capacity for harm. In a tight-knit community, shared belief in external evil can sometimes make reintegration easier: if wrongdoing can be attributed to “the Devil” rather than to the person’s character, the community may find it easier to forgive—especially if a ritual of repentance, prayer, or “deliverance” has been performed. But there is a downside as well: externalizing evil can blunt accountability, and it can also encourage projection—seeing “the Devil” in outsiders, dissenters, or scapegoats—fueling fear, prejudice, or moral panic.

Thursday, February 26, 2026

The Psychology of Religion, Chapter 18: Prayer

Prayer may mean different things to different people. For many, it is a meditative act: a type of philosophical reflection with existential themes, a kind of relaxation therapy, a “grounding” moment. The praying person may believe they are having a conversation with God. The manner in which God is understood to speak back is often taken in a broad, figurative way—for example, if the person subsequently has a new idea, an inclination, a redoubling of confidence, or a wave of emotion that feels like guidance. Other people may not expect that God will “speak back” at all; they may be content simply to vent, confess, grieve, or reflect within a reverent framework. In some ways this resembles classical psychoanalysis: the listener is largely silent, and the act of speaking—slowly, honestly, repeatedly—becomes the mechanism.

Beyond that reflective mode, many people also pray for things—for an outcome to change, for an illness to heal, for a surgery to go well, for a war to end, for a relationship to mend. That kind of prayer is different. If it is literally effective, it would mean that events in the physical world are being altered—something in the normal chain of causation is being nudged off course. And if this were happening in a consistent, repeatable way, you would expect to see clear clusters of unusually good outcomes in places where people pray the most, or where the “right” kind of prayer is supposedly most common. You would expect the world to look, especially in more religious areas, as though the ordinary rules of physics are being bent on request. I am not aware of any such pattern.

When researchers have tried to test this carefully—especially with “praying for someone else” (intercessory prayer)—the results have not produced a solid, repeatable signal. A well-known example is the STEP trial in cardiac bypass patients: people were randomized to receive or not receive intercessory prayer, and another group was told with certainty that they were being prayed for. Overall, prayer did not reduce medical complications. Interestingly, the group who knew they were being prayed for actually did a bit worse: complications were reported in 59% of those certain they were receiving prayer versus 52% in a comparison group. One plausible explanation is psychological: once a person is told “people are praying for you,” it can quietly raise the pressure. What if I don’t get better? What does that mean about me? About God? About my faith? For someone already frightened and vulnerable, that extra layer—expectation, scrutiny, the sense that a spiritual “test” is underway—can add stress rather than comfort.


It is not hard to consider other thought experiments: if prayer were an instrumental force capable of altering physical reality, we would expect to see distinct epidemiological advantages in highly religious regions. We would expect higher rates of spontaneous remission from illness, fewer natural disasters, and lower mortality rates in areas where people pray more often or hold the supposedly correct beliefs. Yet, when comparing regions with similar socioeconomic and demographic baselines, no such supernatural dividend appears; in fact, highly secular democracies consistently show some of the best objective markers of societal health. While religion undeniably provides robust psychological comfort, social cohesion, and subjective well-being to its practitioners, the data reveal a strictly secular mechanism at play. This resilience is not derived from the literal truth of dogmatic claims or divine intervention. Rather, it emerges from the profound social capital of a supportive community and the stabilizing architecture of a shared belief system—even a fundamentally fictional one. Such overarching frameworks equip individuals with a coherent narrative, allowing them to intellectually and emotionally process adversity, uncertainty, and loss more efficiently. But as a mechanism for changing the external, physical world, prayer demonstrates no measurable effect.

Spatial Language


One small point about human religious behaviour, deriving from ancient practice, is the spatial language: “God above.” People sometimes literally look upward when praying. But “up” points in different directions depending on where you are on Earth; and it changes minute by minute as the Earth rotates, orbits the sun, and as the solar system moves through the galaxy. A person in Australia looking up towards Heaven is looking in the same direction as someone in North America looking downwards into the ground. It is a pre-Copernican spatial metaphor, entangled with the older intuition that “up is good, down is bad.”

Of course, “looking upward” is often figurative—but many people do take it quite literally. If one were going to take the gesture literally, it would be just as “valid” to look downward, or inward into one’s own body. If God is omnipresent, shouldn’t God be as present in the depths of the planet—or in our own bodies—as in the sky? The gesture tells us less about the geography of a deity than about the structure of the human imagination.

A related embodied metaphor shows up in some fundamentalist worship styles: people in an entranced state reach forward with their hands during songs or prayer—eyes half-closed, rocking, repeating sacred phrases, emotional intensity magnified by the synchrony of peers. This can be understood as a normal human ecstatic gesture, an ability present in all cultures with or without religion. But the gesture still implies a spatial location of God—reaching out to take God’s warmth with one’s hands, as though God were physically located just ahead, perhaps in the front of the building. Again, the scene tells us much about embodied human longing, and very little about the actual location of a deity.

Prayer & Empathy


The moral structure of prayer often mirrors the moral structure of empathy. Many people’s prayers are genuinely compassionate: they think of struggling friends or family members, or of terrible world events, and they ask for comfort, protection, and healing. But if prayer is believed to cause divine comfort to arrive, this raises an uncomfortable counterfactual: if the prayer had not occurred, would comfort have been withheld? Shouldn’t a loving deity comfort suffering people regardless of whether someone happens to pray for them—especially since some of the worst suffering on earth occurs in isolation, unnoticed, with no one else even aware enough to pray? It suggests a troubling arrangement where God’s help isn’t based on who is suffering the most, but on who is lucky enough to be noticed.

This is also where it helps to remember Paul Bloom’s critique of empathy (see my review of his book, Against Empathy). Empathy is often biased and therefore unjust: it is pulled toward people who resemble us, toward vivid stories, toward those whose suffering is emotionally dramatic, while neglecting the quiet, the distant, the stigmatized, and the statistically larger tragedies that do not come with a single tear-streaked face. Prayer often inherits this same distortion. We pray intensely for the salient and familiar, and far less for abstract fairness, or for the invisible victims who never make it into our attention.

Many prayers are not about others at all; they are about wishing something for oneself. There are battlefield prayers. Prayers before a medical procedure. Prayers for money, for a job, for the return of an ex-partner, for relief from chronic pain, for the outcome of a baseball pitch or a hockey game. As a meditative act, this is deeply understandable. But psychologically it can set up a reinforcement loop: if the prayer is followed by a good outcome, the person will naturally feel it “worked,” and will be bolstered to pray again. If the outcome is bad, the person may conclude they didn’t pray sincerely enough, or long enough, or correctly enough—or that God was busy, or displeased, or testing them. Either way, the practice becomes insulated from disconfirmation.

This helps explain why prayer works psychologically, even when the supernatural claims are not true. As a form of meditation or reflection, it can be calming and help organize our thoughts. But as a way to change the laws of physics or alter the course of events, it has no effect, yet it still functions as a self-reinforcing loop. When a prayer is followed by a desired outcome, that is taken as proof of God’s power. When it is not, the failure is easily explained away—either God said “no,” or the prayer lacked sufficient faith. This dynamic validates the belief system regardless of the result, but it places a burden on the believer—creating the illusion that their personal spiritual effort is the decisive factor in changing reality.
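The loop can be made concrete with a toy simulation (the setup is entirely hypothetical): outcomes are generated by pure chance, yet because only desired outcomes are recorded as evidence, perceived confirmation accumulates and nothing ever counts against the belief.

```python
import random

# Toy simulation of a self-sealing belief loop (illustrative assumptions only):
# outcomes are pure chance, but a good outcome counts as "the prayer worked,"
# while a bad outcome is explained away, so no disconfirmation is ever recorded.

def simulate(n_prayers, p_good=0.5, seed=42):
    rng = random.Random(seed)
    confirmations = 0
    for _ in range(n_prayers):
        if rng.random() < p_good:
            confirmations += 1   # desired outcome: taken as proof the prayer worked
        else:
            pass                 # undesired outcome: "God said no" / "insufficient faith"
    return confirmations

print(simulate(100))  # prints the count of perceived confirmations; refutations: zero
```

Even though prayer has no effect on the simulated outcomes, the believer’s tally of “evidence” only ever grows, which is exactly what insulation from disconfirmation looks like.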



The Psychology of Religion, Chapter 17: Shepherding

A related religious metaphor is shepherding. Jesus is called the “Good Shepherd,” and there are many other biblical passages that liken God to a shepherd. It is a beautiful image, and as a child I absorbed it in exactly that spirit: kindly pastoral artwork, a gentle man with a hooked staff, sunny hills, a flock of woolly friends, perhaps one little sheep who has wandered off and needs to be carried back to safety.

But it is worth pausing to remember what shepherding actually meant in that time and place. Sheep were not kept as pets. They were livestock: valued for wool and milk, yes, but also raised for meat—and sometimes for sacrifice. Sacrifice involved securing the animal with iron rings in front of an altar, cutting its throat, and collecting the blood in a special container to be splashed against the altar; the animal would then be hung from a hook and skinned, and various organs would be removed and burned.


A shepherd’s role was not only protection and guidance; it also involved ownership, control, and (eventually) decisions about which animals would be killed, sacrificed, or eaten. In that light, “being shepherded” contains an unsettling double meaning: you are kept from straying, guarded from wolves, and held within the safety of the flock—but you are also being managed toward ends that are not your own.

And if we push the image just one step closer to lived reality, it gets darker in a way the children’s illustrations never hinted at. Imagine being a sheep in the flock: every so often the younger males—your cousins, in a sense—are taken away. Perhaps they are led toward a little shed at the edge of the field, or down a path behind a stand of trees, and they are simply never seen again. The flock goes on grazing. The shepherd is still “protecting” the flock. But the protection is inseparable from a system in which some members are quietly designated for disappearance.

To be fair, the Christian image in particular tries to invert the usual arrangement: the “Good Shepherd” is portrayed as laying down his life for the sheep. That is morally striking. Still, the metaphor does something psychologically and socially important: it trains us to admire a certain kind of relationship—one in which docility is a virtue, “straying” is a moral failure, and the authority to define what counts as straying belongs to the shepherd.

The phrase “sheep gone astray” appears repeatedly in scripture, usually as a metaphor for human misbehavior. But actual sheep that never “go astray” do not graduate into freedom; they remain in the flock under management. As a child I never thought of this. Now I think the metaphor is revealing, not because it proves anything on its own, but because it quietly captures a profoundly unsettling moral posture: the idealization of a passive, domesticated existence where total subjugation is rebranded as pastoral care, and where the ultimate reward for perfect obedience is to remain in a community where you and your peers are quietly led to a brutal end, dictated entirely by the whims of the shepherd.


Wednesday, February 25, 2026

The Psychology of Religion, Chapter 16: Sacrifice

Most religions have some form of sacrifice alluded to in their theology. Sometimes this involves literal offerings—killing and burning animals, or destroying valuable objects. Other times it is “bloodless”: giving money, time, obedience, or the renunciation of pleasures through fasting, abstinence, or celibacy. In all these cases, the underlying idea is similar: something costly is offered up, with the hope of securing meaning, favor, purity, forgiveness, protection, or communal belonging.


There are also sacrificial motifs that move disturbingly close to human sacrifice. In the Abrahamic traditions, for example, the willingness of Abraham/Ibrahim to sacrifice his son is presented as a peak test of obedience—and in Islam it is commemorated annually in Eid al-Adha, the “Festival of Sacrifice,” in which animal sacrifice functions as a memorial of that story. And in Christianity, the theme of sacrifice is carried into the central story of Jesus: a dramatic moral and symbolic reframing of sacrifice into self-sacrifice, offered “for others.”


If there really were an all-powerful deity, what sort of being would want a dead animal or a burnt work of art as a gift? One might think that a god worth revering would consider it a gift if you were to do good for other people, or care for the natural world. But sacrificial systems do not usually work that way.


Nor is sacrifice some oddity of the Abrahamic traditions. Across much of the ancient world, sacrificial traditions were common, and they were often brutal. Ancient Greek religion had animal sacrifice. Vedic religion in India revolved around yajña, sacrificial ritual. Ancient China too had elaborate sacrificial practices directed toward ancestors and higher powers, sometimes involving animals and at times human beings. The Aztecs are especially notorious for human sacrifice. And the roots of all this may go back shockingly far. A 2019 archaeological paper on symbolic destruction says that “the earliest evidence, dated to about 26,000 BP,” comes from Dolní Věstonice, in the form of making and then “exploding” clay figurines. If that interpretation is right, proto-sacrificial or ritually destructive behavior belongs among the earliest traces of symbolic culture that we have.


Reciprocity, Magical Thinking, and Social Technology


Sacrifice is, in my view, an extension of ordinary human ideas about reciprocity and gratitude—infused with magical thinking. In a community we do favors, give gifts, and care for one another. These behaviors can be altruistic, but they are also supported by norms of reciprocity. If one believes that a mystical power controls destiny, fertility, weather, health, wealth, or military success, it becomes psychologically “reasonable,” within that worldview, to give that power a gift—hoping for a return.


And once a person enters this mindset, the logic can become self-sealing. If you make sacrifices and misfortune still comes, you can conclude the offering wasn’t sufficient, wasn’t sincere enough, or wasn’t given with the right purity of heart—so you must increase it next time. If something good happens afterward, it feels like proof that the sacrifice worked, and should be repeated. In this way, practicing sacrifice can become an escalating, brutal, and destructive behavior. The sacrificed animals—often the most vulnerable and least able to “consent” to the human story being told about them—do not get much say in the matter.


Another motivation for sacrificial rituals likely came from the brutal necessities of ancient life: hunting animals, or killing domestic animals for food. Most humans bond to animals easily, and it would be psychologically troubling to watch an animal struggle and suffer. Ritual can function as moral anesthetic: a way to consecrate violence, to assuage guilt, and to turn a grim necessity into a story of gratitude, order, and meaning.


Sacrifice can also be political performance. Public ritual can consolidate hierarchy, especially priestly hierarchy, display power, intensify fear, and signal unity. It is not hard to see how sacrifice functions as a kind of social technology: it makes shared belief visible and costly. It puts loyalty on display. It shows who is serious, who is obedient, who can be trusted, and who has the authority to declare what counts as holy.


This is also where sacrifice connects to group psychology. Some scholars have argued that costly rituals—things you would not do unless you were committed—operate as signals that strengthen trust and cooperation within a group, partly by filtering out free riders. A community bound together by shared sacrifice can feel safer, warmer, and more morally serious to its members. But that same mechanism can harden boundaries and intensify suspicion of outsiders.


And costly sacrifice does not merely send a signal to other people; it also works on the person making the sacrifice. People are generally reluctant to admit that they have suffered for nothing. So the greater the sacrifice, the stronger the pressure to reinterpret the suffering as meaningful, noble, or necessary. That helps make sacrificial systems self-protective and self-reinforcing. The cost itself becomes part of the “evidence” that the belief must matter.

Kin Altruism


Speaking of reciprocity: favoring and helping genetic relatives, sometimes even in self-sacrificial ways, is a strongly selected trait. If a person carries a trait that inclines them to selectively help close relatives, that trait will tend to persist in the family line, because close relatives are more likely to carry the same genes that helped produce the tendency in the first place. The evolutionary logic is simple: kin altruism increases the survival and reproductive success of the shared family “pool,” even when it costs the individual something in the short run.
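That evolutionary logic is conventionally formalized as Hamilton’s rule, a standard result in evolutionary biology (the notation below is the textbook convention, not anything specific to this chapter):

```latex
% Hamilton's rule: selection favors an altruistic act when
%   r = coefficient of relatedness between actor and recipient
%       (about 1/2 for full siblings, 1/8 for first cousins)
%   b = reproductive benefit conferred on the recipient
%   c = reproductive cost paid by the actor
\[
  r\,b > c
\]
```

On this rule, sacrificing for a full sibling “pays off” in gene-propagation terms whenever the sibling’s benefit is more than twice the actor’s cost; hence J. B. S. Haldane’s famous quip that he would lay down his life for “two brothers or eight cousins.”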


But humans do not walk around calculating degrees of genetic relatedness. Instead, we rely on crude, fast estimates—cues that, over most of human history, were often correlated with kinship and shared ancestry. People who live near each other, marry each other, and raise children together will, over generations, tend to share not only genes but also language, accent, customs, dress, habits, and social norms. They may also tend, on average, to resemble one another physically more than they resemble people from a distant village, tribe, or lineage. Conversely, people who look different, speak differently, or practice very different customs are often from a different village, tribe, or family network—and therefore are somewhat less likely to be as closely genetically related as the people who share your immediate cultural and familial world. 


Similarity of appearance, familiarity of accent, shared habits, shared rituals, shared dress, and shared taboos can all become proxies—very imperfect proxies—for “one of us.”  Religion gives people common dress, common restrictions, common foods, common sacrifices, common songs, common stories, and common enemies. In other words, it manufactures the feeling of kinship, even among people who are not literally kin.


The mind has evolved to be slightly more generous, trusting, and self-sacrificing toward those who are more likely to be “one of us,” so it follows that it may also be less generous, more suspicious, or more emotionally distant toward those who feel like “not us.” These tendencies are not destiny, and they are not moral justification—but they are part of the psychological and evolutionary foundation of prejudice. These are precisely the sorts of inherited inclinations we must learn to recognize, challenge, and actively override.


Belonging and Group Boundaries


Religion can sometimes widen the circle of felt family. But it can also strengthen the distinction between those inside the group and those outside it. Once sacrifice, loyalty, and group identity are fused together, shared customs can take on unusual emotional and moral weight, and group boundaries can begin to feel especially important. The stronger those boundaries become, the easier it is for outsiders to be viewed with suspicion, distance, or moral distrust. This does not mean religion always produces hostility, or that it does so uniquely; these are broader features of human social psychology, and religion does not invent them. But religion can give them a sacred language, a ritual structure, and a greater sense of seriousness. In that way, stronger religious boundaries can contribute to increased exclusion and, in some cases, increased hostility between groups.




The Psychology of Religion, Chapter 15: Spirituality & Superstition

Humans have cognitive tendencies that make superstitious beliefs easy to generate—and hard to extinguish. By “spirituality” here I do not mean awe, contemplation, or reverence in a broad sense; I mean the more specific belief that hidden forces—fate, synchronicity, spirits, or nonphysical “energy”—are actively guiding events. Beliefs in spirits, ghosts, magic, luck, or fate guided by mysterious forces are widespread across cultures. The specifics vary wildly from place to place—local spirits, protective rituals, sacred objects, invisible dangers—but the underlying psychological grammar is familiar.

Meaning, Pattern, and Agency

A core ingredient is pattern-seeking. The mind craves meaning, and when the world is uncertain or painful it will often manufacture meaning rather than tolerate ambiguity. This is not a sign of low intelligence; it is ordinary cognition under stress. Pattern-seeking is only part of the story: humans also readily detect agency, intuit purpose, and imagine hidden minds or forces operating behind ambiguous events. When people feel a loss of control, they become more likely to perceive patterns—even illusory ones—in the environment, and to treat coincidence as signal. Superstition can be emotionally satisfying precisely because it converts randomness into a story.

Stories, dreams, unusual experiences, and compelling anecdotes can then become socially transmissible. Once a few people begin to interpret events through a “hidden forces” framework, the framework spreads: it gives language to fear and hope, it creates a sense of specialness, and it offers the pleasure of explanatory closure. Coincidences become “signs.” Ambiguous perceptions become “messages.” A confusing life becomes a legible plot.

Beliefs about fate, synchronicity, or “good and bad energy” fit neatly into this same psychology. A person has a strong feeling—dread, relief, attraction, foreboding—and the mind is tempted to treat that feeling as information about the outer world. A difficult decision can then feel as though it has been answered by “the universe.” A coincidence becomes destiny. A run of bad luck starts to feel orchestrated. The step from “this feels meaningful” to “this is objectively meaningful” is, for many people, quite small. In cultural settings where unusual feelings are already given a supernatural vocabulary, it becomes even easier for an ordinary human experience to be interpreted as fate, guidance, or invisible force.

Why It Can Feel Helpful

Sometimes these beliefs can even confer a short-term psychological benefit. A ritual, talisman, or conviction that one has “good energy” behind them can reduce anxiety, increase confidence, and make a person feel more ready to act. In that sense, superstition can work a little like prayer, placebo, or a pre-performance routine: it changes the person’s emotional state, and that changed emotional state can sometimes improve performance or endurance. But this does not validate the supernatural explanation. It shows that belief can alter mood, attention, and confidence—not that a mystical force is operating in the background.

When Meaning Hardens into Causation

The trouble begins when a poetic or emotionally satisfying interpretation hardens into a literal theory about reality. At that point there is no longer only a harmless sense of wonder; there is a false model of causation. There is still no robust, independently replicated body of evidence that psychic forces, spirits, or nonphysical “energies” of this sort are objectively guiding events in the way believers often suppose. And there is no good reason to treat a strong feeling of destiny as evidence that destiny is real. Once such beliefs are treated as evidence, judgment begins to drift away from probability, base rate, character, and practical consequence. A person may stay in a bad relationship because it feels “meant to be.” They may avoid a sound medical treatment because the illness is thought to be spiritual. They may take reckless risks because fate is presumed to be protective. Life planning becomes poorer when omens and vibes displace sober thinking about what is actually happening.

When Superstition Turns Social

There is a darker social risk as well. Once people begin to believe that invisible moral or spiritual contamination clings to persons, places, or groups, superstition can become a license for prejudice. History offers grim examples of what happens when communities weaponize these causal illusions. The early modern European witch crazes, which claimed tens of thousands of lives, were driven in large part by the urge to assign occult causality to illness, infant mortality, crop failure, or social misfortune. Medieval blood-libel accusations and later pogroms during epidemics drew on related fantasies of hidden contamination and malevolent agency. A more contemporary example can be seen in the persecution of people with albinism in parts of Sub-Saharan Africa, where witchcraft beliefs and ritual myths still endanger lives. In modern, everyday life, the seeds of this same pathology are more banal but equally insidious: a neighbour is said to have “dark energy.” A house is called cursed. A child is treated as spiritually tainted. A stranger is felt to be threatening in some occult way rather than simply unfamiliar. Once a group shares such assumptions openly, they no longer remain private quirks of interpretation; they coalesce into a moral atmosphere in which exclusion, suspicion, and even physical violence feel justified. This is how irrational belief can slide from the sanctuary of private comfort into the arena of public harm.

A Humane Response

At the same time, this topic calls for sensitivity. For the person immersed in such beliefs, the experience does not feel frivolous. It may feel visceral, self-evident, and woven into memory from early life. It may have been reinforced for years by trusted friends, family, charismatic figures, selected anecdotes, online communities, and a steady diet of “paranormal” documentaries or videos that showcase apparent hits while ignoring the endless misses. When a belief has been stabilized by familiarity, repetition, and community endorsement, challenging it can feel less like an intellectual correction than like an invalidation of lived experience. The humane response is not to mock the feeling. The feeling is real. What deserves challenge is the conclusion drawn from it.

A Psychiatric View

From a psychiatric point of view, there is also genuine individual variation in proneness to unusual, mystical, or numinous experience. Some people reliably feel awe, presence, synchronicity, and “spiritual certainty,” while others rarely do. Some people seem to have a more absorptive mind: more prone to inner vividness and felt significance. None of this means the experience is itself pathology. It is shaped by personality and temperament, by culture and reinforcement, and by biology. One useful but imperfect metaphor is that some minds run with higher “gain”: experience arrives vivid and compelling, but with a greater risk that noise is interpreted as signal. Salience systems in the brain are part of this story, though the biology is not reducible to dopamine alone. A related literature suggests that paranormal belief is associated, on average, with more intuitive thinking styles and some weaknesses in probabilistic thinking and analytic reasoning, though of course none of this maps neatly onto any one individual person.

Spirituality and Religion

Many members of organized religions disparage “superstition” or free-floating “spirituality.” Yet in psychological terms—at the level of cognitive ingredients—the differences are often of degree rather than kind. Organized religions tend to formalize these human tendencies into institutions: they standardize the stories, professionalize the interpreters, and link belief to group identity and obligation. “Spirituality,” in contrast, often keeps the intuitions while loosening the institutional grip. But both draw on the same human appetite for meaning, comfort, narrative, and relief from uncertainty.

Next Chapter

The Psychology of Religion, Chapter 14: Religion as a Business

Many religions and other spiritual practices operate partly like a business. There is marketing (proselytization, outreach), branding (symbols people wear on clothing or on necklaces), encouragement to stay loyal to your brand, and criticism of competing brands. There is also a financial commitment, which gives rise to an organized financial structure. Members of that structure have work to do, with an ultimate goal—explicit or implicit—of retaining and expanding membership, eliciting volunteer effort and financial contributions, and maintaining morale.

With some intensely tribal, high-commitment groups (fraternities are the obvious benign example, gangs the darker one), there can be an onerous initiation ritual. Social psychologists have shown that when people have to work hard, endure discomfort, or pay a steep price to join, they often become more loyal afterward—partly because the mind naturally tries to justify what it has sacrificed. Religions also commonly have initiation processes: potential members may be vetted, attend educational sessions, and then take part in some public ritual in which solemn commitments are made.

Sometimes, as with luxury business models, broad proselytization does not occur; instead, the “product” is restricted. Only a select few gain entry. In some traditions you need advanced membership—often taking years—before you are allowed to enter certain beautiful buildings such as temples, or partake in certain deep rituals. Sometimes only men are allowed into certain leadership roles or ritual spaces. These obstacles increase the allure and tend to attract people willing to contribute more commitment, time, and money. If everybody had a Rolex watch or a Gucci bag, it would cease to be as special; exclusivity is part of what makes the object feel “high-end.”

One particular feature of religion that resembles a corporate tactic is the elevation of belief alone—faith—as a key virtue. Belief without evidence is not merely tolerated; it is often praised. If a corporation could successfully propagate that idea, it would be extremely useful for marketing, since people would form loyalty to the brand without looking too closely at “reviews.” Doubt could be reframed as weakness, betrayal, or impurity. Meanwhile, “true believers” are rewarded: their status, trust, and esteem in the community rise in proportion to their loyalty.

In many cases religious institutions amass vast wealth: in property, buildings, and investments. In at least some prominent modern examples, credible reporting and public filings have described religious investment holdings on the order of tens of billions of dollars, with wider claims in some cases exceeding $100 billion—figures that are difficult to reconcile with the ordinary believer’s image of humble spiritual stewardship. And these structures often operate with significant tax advantages. In the United States, churches are generally treated as tax-exempt. In Canada, registered charities (including many religious organizations) are exempt from paying income tax while registered. 

And yet, some of the most insightful cautions about wealth come from within religion itself. One of the sharpest is the line attributed to Jesus (present in all three Synoptic Gospels): “it is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God.”

The Psychology of Religion, Chapter 13: To Be a Scholar, You Had to Study Theology

For long stretches of European history, many able people who wanted to study the biggest questions ended up studying religion. That was not always because religion had the best answers. Often it was because religious institutions controlled much of the schooling, the books, and the road to public influence. If you wanted literacy, training, status, or a respected voice in your community, religion was often the main gate you had to pass through.

Before the printing press, and before secular universities became common, the Church often controlled many of the material conditions of scholarship as well: the copying of books, access to manuscripts, and much of the economic support that allowed a person to read, write, and think rather than spend life in manual labor. In practice, that often meant that to be a scholar was to be funded, housed, trained, or at least tolerated by a religious institution. That inevitably shaped what could be said, what could be explored, and how far a person could go.

This created a built-in bias. It was not just that highly capable people happened to like theology. It was that theology sat near the center of educated life. If you wanted mentors, libraries, credentials, or a place to teach and write, you often had to work inside a religious setting, or at least learn to speak its language. In that kind of system, religion could borrow prestige from the educated people who passed through it.

That distinction mattered. When people saw a brilliant, educated, generous person who was also a priest, minister, or theologian, they could easily draw the wrong lesson: if someone this thoughtful believes it, maybe the religion itself must be true. But that does not follow. A person can be wise, morally serious, and deeply useful to society while still believing things that are false. Learning and talent do not make a doctrine true.

There was another pressure as well. In many periods, if you were a serious thinker and openly challenged the religious system, you were not just risking an argument. You could lose your position, your audience, your safety, or even your life. So the historical record is not a fair contest in which every idea had the same chance to survive. People who stayed within accepted limits were more likely to keep teaching, keep publishing, and keep being remembered.

Galileo is a clear example. He used a telescope to study the sky and argued publicly that Earth moved around the sun. Today that sounds ordinary. In his time it crossed powerful religious limits. He was put on trial in 1633 and spent the rest of his life under house arrest. Whatever one thinks about all the details of the case, one lesson is plain: a scholar’s standing and safety could depend on not going too far beyond approved belief.

Giordano Bruno shows a somewhat different version of the same pattern. He put forward bold ideas about the universe and about religion itself, and authorities judged some of those ideas unacceptable. He was executed in Rome in 1600. The point is not to turn him into a simple hero of science. The point is that religious power helped set the boundaries of acceptable thought, and crossing those boundaries could be deadly.

And this pressure was not limited to astronomy. Reformers, translators, and other dissidents were also punished severely in different times and places. Even when the issue was not a new scientific discovery, the deeper conflict was often the same: who gets to define truth, and what happens to you if you say otherwise in public?

Europe was not unique in this broad pattern. In the Middle East and the wider Islamic world, serious learning often grew around mosques, religious schools, and scholars of law and scripture. In China, higher learning and public advancement often depended on mastery of the Confucian classics and success in the imperial examination system. In India too, advanced learning often grew around religious traditions, temple settings, monasteries, and learned priestly circles. The details differed from place to place, but the larger point remains: when one tradition controls the road to education and status, its ideas can start to look far more intellectually established than they really are.

To be fair, religious institutions also preserved and passed on learning in many eras. They copied books, trained students, and helped keep scholarship alive. Several well-known universities began in religious settings or still carry that identity. So the story is not simple. But that is exactly why the bias is easy to miss. When the same institution both protects learning and sets the limits of belief, educated people will naturally be overrepresented inside that system.

You can still see a milder version of this today. Many excellent private schools, colleges, and universities are sponsored by religious organizations and maintain very high academic standards. Some of them also require students to take courses in religion, attend chapel, or absorb a broader moral and intellectual outlook shaped by a faith tradition. That does not mean these schools are weak academically; some are excellent. But it does mean that strong education and religious commitment can be packaged together in a way that makes the religion seem more intellectually confirmed than it really is.

There is a second modern echo too. In some religious schools, seminaries, tightly bound communities, and other closed systems, career paths and social acceptance can still depend on affirming the right beliefs. The penalties are usually softer now—not execution, but loss of job, loss of status, loss of community, loss of belonging. The basic pattern is similar: a belief system becomes part of the ticket into a valued world. And once again, this can make it look as though the best minds support the belief, when in fact some critics have left, been forced to stay quiet, or been pushed out.

So the old link between theology and scholarship was often not straightforward proof that theology was true. Much of it grew out of history, institutional power, and control over the means of education. For long stretches of time, if you wanted a serious education, you often had to study religion; and if you wanted to keep your place in public intellectual life, you often had to stay inside religion’s limits. That alone can make religion look intellectually stronger than the evidence for its literal claims really is.


Next Chapter