Evil: Between Circumstance and Disposition

The claim that “evil does not exist” offers seductive comfort in our contemporary moment. It suggests that all human harm can be explained away through trauma, ideology, or circumstance—that beneath every atrocity lies a victim of forces beyond their control. Yet this denial, however psychologically appealing, fails to account for both lived experience and the wisdom of traditions across the globe that have grappled with evil’s reality for millennia.

The Persistent Duality

Across human cultures, a pattern emerges that refuses both naive optimism about human nature and cynical despair about human prospects: the recognition of evil as both universal potential and rare embodiment.

Religious Wisdom: The Universal and the Particular

Religious traditions worldwide have long navigated this tension. In Christianity, Augustine and Aquinas understood evil as privation—parasitic on goodness, lacking independent essence—yet the tradition simultaneously recognizes agents who willfully choose destruction. It can speak of evil’s ultimate unreality while acknowledging figures like Satan or earthly tyrants who embody malevolent will.

Judaism offers the yetzer hara, the evil inclination present in all humans, alongside stories of figures like Pharaoh whose hearts become hardened beyond redemption. Islam acknowledges how Shaytan’s whispers can lead anyone astray while identifying certain individuals as “corrupters on earth”—those who seem fundamentally oriented toward destruction.

Buddhism sees evil arising from the universal poisons of greed, hatred, and delusion, yet personifies persistent temptation in Mara. Hinduism recognizes the interplay of dharma and adharma, while acknowledging that some souls become so entangled in maya and negative karma that they embody destructive patterns across lifetimes. The Bhagavad Gita speaks of those who, “knowing what is right, still choose what is wrong.”

In African traditional religions, evil often appears as imbalance—a disruption of cosmic harmony that anyone might fall into—yet there are also concepts like the Yoruba notion of certain individuals whose ori (destiny) seems bound to destructive paths. Native American traditions similarly balance the potential for any person to lose their way with recognition that some become consistently harmful to the community’s wellbeing.

Even traditions emphasizing cosmic harmony, like Confucianism and Daoism, preserve stories of tyrants whose cruelty seems to transcend circumstantial explanation. The Dao encompasses all, yet some individuals appear to embody persistent disharmony.

Ancient Greek thought offers its own version: while anyone might be led astray by hubris or circumstance, the tyrants of its tragedies represent something deeper, a fundamental corruption of character that goes beyond mere error.

Psychology: The Ordinary and the Exceptional

Modern psychology tells a similar story. Most human cruelty turns out to be situational: Milgram’s obedience experiments and Zimbardo’s Stanford Prison Experiment show how ordinary people can be induced to inflict extraordinary harm. Hannah Arendt captured this with her phrase “the banality of evil”—most atrocities emerge not from demonic intent but from thoughtlessness, conformity, and moral abdication.

But psychology also confirms something more troubling. Research on psychopathy and sadism reveals that a small minority—less than one percent of the population—appear genuinely inclined toward harm. This statistical rarity matters: we’re not describing a common human variant but an exceptional one. For these individuals, cruelty isn’t just a response to pressure but seems to emerge from character itself. They harm not because they must, but because they choose to.

Philosophy: Freedom, Relation, and System

Secular philosophy has explored these same themes through different lenses. Kant spoke of “radical evil”—the willful choice to subordinate moral law to self-interest. For him, evil wasn’t mere weakness but the deliberate perversion of human freedom. It was rare but undeniably real.

Nietzsche, despite rejecting traditional morality, acknowledged what he called “ressentiment”—the active diminishment of life by certain human types. While denying metaphysical evil, he admitted dispositional tendencies toward life-denial that echo religious ideas about evil character.

Levinas located evil in the refusal to acknowledge the Other’s humanity. For him, atrocities happen when responsibility for others is denied. This links evil to relational failure rather than metaphysical essence, yet his framework still admits that some people seem persistently closed to ethical encounter.

Bauman’s analysis of the Holocaust showed how evil thrives within bureaucratic rationality, revealing how institutions provide cover for both situational compliance and dispositional malice. Camus, in The Plague, presented evil as both universal threat and resistible force—something requiring constant vigilance without falling into despair.

Why Evil Spreads

Here’s a crucial insight: evil rarely coordinates effectively on its own. Dispositionally malicious individuals compete more than they cooperate—their fundamental orientation toward exploitation makes stable alliances nearly impossible. For evil to achieve systematic expression—to become genocide, slavery, or totalitarian oppression—it needs to borrow structures from cooperative society: institutions, ideologies, and cultural mechanisms that let it parasitize ordinary human compliance.

This coordination failure explains why history’s greatest atrocities, from ancient tyrannies to medieval inquisitions, from colonial genocides to modern totalitarian states, follow similar patterns. A small number of genuinely malicious actors manipulate existing systems, exploiting the situational susceptibility of otherwise decent people. Evil spreads not through multiplying evil individuals, but through corrupting normal human psychology and social vulnerabilities.

Beyond Simple Denial

The contemporary impulse to deny evil’s reality captures something important while making a logical error. Yes, most human cruelty is circumstantial, explainable through trauma, social pressure, and systemic forces. This recognition matters for effective intervention and prevention. But the absence of definitive proof for dispositional evil isn’t proof of its absence—particularly when historical evidence and psychological research point consistently toward its reality, however rare.

The category “evil” also serves essential moral functions that purely descriptive language can’t. It operates as a boundary marker for the unacceptable, a term of moral shock that pierces through euphemism and rationalization, and rallying language for collective resistance. To abandon it is to weaken our moral vocabulary precisely when we need it most.

The sober truth requires holding both realities simultaneously: evil exists as a rare but real disposition in some individuals, even as it remains a universal potential that circumstances can activate in almost anyone. Religious traditions preserve this duality through their stories of universal inclination and particular incarnation. Psychology confirms it through experimental evidence and diagnostic categories. Philosophy reframes it through analyses of freedom, relationality, and systematic dynamics.

Toward Moral Clarity

To deny evil entirely leaves us without adequate language for the worst human actions and insufficient tools for prevention. To overinflate evil paralyzes moral judgment and social action. The mature response recognizes that dispositional evil, while affecting less than one percent of the population, remains real and dangerous when it gains institutional power.

This recognition demands neither naive optimism nor cynical despair, but rather sustained vigilance—toward both the circumstances that can corrupt ordinary people and the rare individuals whose corruption seems to transcend circumstance. Only by acknowledging evil’s reality in both forms—as universal human potential and exceptional human disposition—can we hope to resist its expression in either.

Objections and Replies

Any account of evil must face the skeptical objection: we cannot know whether evil is innate or circumstantial, because we cannot access another person’s inner life. If that’s so, then the distinction between dispositional and situational evil is meaningless, and judgments of “evil” are presumptuous at best.

This objection has force, but it doesn’t succeed. Several replies are available:

Fallibility doesn’t erase categories. The fact that we sometimes misclassify phenomena doesn’t mean the categories themselves are invalid. We occasionally confuse red with orange, yet both colors exist. Likewise, the possibility of misjudgment doesn’t nullify the distinction between situational and dispositional evil.

We judge by patterns, not private access. We don’t need privileged access to another’s inner life to recognize recurring shapes of behavior. If someone repeatedly and eagerly seeks opportunities to harm, even across varying circumstances, the pattern itself justifies our judgment. Categories arise from public observation, not private certainties.

The distinction has practical consequences. Even if we only ever perceive outcomes, it matters whether harm is situationally induced or dispositionally driven. The situationally corruptible can often be redirected or rehabilitated; the dispositionally malicious require containment and constant vigilance. To erase the distinction is to flatten vital moral and political differences.

The objection itself assumes too much. The skeptic claims that because we cannot know perfectly, we cannot know at all. But absence of definitive proof isn’t proof of absence. The historical and psychological record consistently suggests that dispositional malice, while rare, is real. Denying the category isn’t humility but overreach.

In short, caution in judgment is wise, but categorical denial isn’t. Evil may be difficult to classify in practice, but difficulty doesn’t equal impossibility. The recognition of dispositional evil remains necessary if we are to describe human reality truthfully and equip ourselves to resist its most dangerous forms.

Process & Results

“No effort in this world
is lost or wasted;
a fragment of sacred duty
saves you from great fear.”
-The Bhagavad-Gita, Chapter 2, Verse 40, trans. Barbara Stoler Miller

For many years, I have believed that process is more important than product. You do not always have control over outcomes. Even with our best efforts, it is often the case that people fail.

However, I have recently come to see another kind of failure among people who care only about process. When you remove results from consideration, a certain subset of people believes it is enough to have made an attempt, to have participated. They do not want to be responsible for outcomes.

These people wash the dishes. However, it does not concern them whether the dishes are clean after their process. For this archetype, the pattern repeats across a wide swath of their activities. It is enough to have made some minimal effort: I called. I talked to someone. They performed some action, and that is enough. It does not matter to them whether they accomplished the task or not. In some cases, people with this problem desperately do not want responsibility, not for accomplishing a task or even for their own lives. They want someone else to blame for their problems.

This reveals a problem with focusing on process. The implicit assumption is that the people involved in the process are making their best effort and improving the process. The reality is that you cannot improve a process without a focus on outcomes, nor can you judge how well a process works without them.

If you are just going through a process, and completing it is what counts, you simply have a system for rationalizing failure.

Shadow Libraries: Library Genesis, ZLibrary & Sci-Hub

I hadn’t come across the term “shadow libraries” before this blog post. I had heard of Sci-Hub, but I’m not a scientist and have never needed to access it. Still, it makes me wonder.

Open Question: How does one balance the way copyright fosters an environment where people conduct research against the negatives associated with restricting access to that information?

In a print paradigm, the medium is a bottleneck. So, you need to provide incentives for publishers to publish a work. But, in a digital environment, the material costs have largely been eliminated or transferred to the reader.

Of course, there are costs of selection, peer review, editing and the other functions of a publisher. But, it seems to me that capitalism is a horrible system for an information architecture, particularly in the sciences, where much of the funding for foundational research is either paid by governments or channeled through public universities. Research that cannot be accessed is no different from research that was never conducted at all.

America’s Modern Character: Paranoid Loser

“[Columbia professor Adam Tooze, writer of the definitive forensic analysis of the 2008 financial crisis in Crashed: How a Decade of Financial Crises Changed the World,] does not buy the line that America is roaring back at the head of a resurgent West, even if the autocracies have suffered a crushing reverse over recent months. ‘I see America as the huge weak link,’ he said.

He broadly subscribes to the Fukuyama thesis that the American body politic is by now so rotten within, so riddled with the cancer of identity politics that it is developing a paranoid loser’s view of the world. The storming of Congress was not so much an aberration under this schema, but rather the character of modern America.”

Ambrose Evans-Pritchard, “The world’s financial system is entering dangerous waters again, warns guru of the Lehman crisis.” The Telegraph. May 23, 2022.

Open question: Is the current populism and “paranoid style” of the American character a sign of decline, or a trait that becomes more prevalent with populist resurgence?

The paranoid character of U.S. politics is not a new claim; see the Richard J. Hofstadter essay, “The Paranoid Style in American Politics.” The online version at Harper’s Magazine is currently behind a paywall. But, I’d imagine most city public libraries have a copy of it.

The paranoid style is a recurring feature of populist movements, right and left, evident from so-called militia/patriot movements to the “woke” left of our time. Nothing is really new about either. But, is there something new in this wave? Is it significantly different than movements that led to prohibition of alcohol and marijuana?

I’m inclined to see the current environment as a variation on a consistent pattern, like the Great Awakenings. Ultimately, these kinds of heated discussions are the strength of democracies, even when they lead to things like the U.S. Civil War. You get your say. If you feel strongly enough, you fight about it. But, in the end, a decision is made and you see how it goes. It’s not dictated by some clown at the top. It’s messy. But, it’s better than the alternative.

Making Friends [on the Internet]

Summarized:

“[1.] follow people you resonate with.

[2.] engage with bigger accounts, support smaller accounts.

[3.] ask questions, offer suggestions, share learnings.

[4.] pay attention to who keeps popping up.

[5.] use the algorithms to your advantage.

[6.] attend virtual events. participate! 

[7.] attend offline events! Be adventurous.

[8.] send that dm / email / offer to connect.

[9.] if they don’t respond, try again in a few months.

[10.] put your thoughts out there.”

-Jonathan Borichevskiy, “Making Friends on the Internet.” jon.bo. May 2, 2022.

Open question: How do you make new friends who will be fellow travelers and help you move in the direction you want your life to go?

The thrust is correct. If you want to make offline friends, you need to orient your online presence to make offline connections. However, there’s a bit of an age bias. When you are 25 and single, it’s a lot easier to go to a meeting on a lark. As you get older, it gets more difficult. You have to arrange a babysitter. There’s also the time to consider. Here’s a rough chart of the time investment required and the number of friends the human brain tends to top out at:

  • 5 intimate friends (200+ hours)
  • 15 close friends (80-100 hours)
  • 50 general friends (40-60 hours)
  • 150 acquaintances (10-20 hours)
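
As a rough, purely illustrative sketch, the chart above can be turned into a back-of-envelope calculator. The thresholds below are the lower bounds from the list, and `weeks_to_reach` is a hypothetical helper, not any established model:

```python
# Back-of-envelope: given a weekly time budget with one person,
# how long until you cross each friendship threshold from the chart above?
# Thresholds are the lower bounds of each tier; purely illustrative.

TIERS = [
    ("acquaintance", 10),     # 10-20 hours
    ("general friend", 40),   # 40-60 hours
    ("close friend", 80),     # 80-100 hours
    ("intimate friend", 200), # 200+ hours
]

def weeks_to_reach(hours_per_week):
    """Weeks needed to cross each tier's lower bound at a given weekly budget."""
    return {name: round(hours / hours_per_week, 1) for name, hours in TIERS}

# At two hours a week (a weekly coffee or a church service plus lunch):
print(weeks_to_reach(2))
# At one hour a week, an intimate friendship takes roughly 200 weeks,
# which is about four years. Hence the value of institutions that
# bundle those hours for you.
```

The point of the arithmetic is the one made in the next paragraph: without a standing social institution that supplies recurring hours, the higher tiers are nearly out of reach for a busy adult.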

The problem, as you get older, is: how do you find those hours to spend with someone? The easiest method is some social institution, such as a church. Over a year, it should be possible to pick up a few friends and acquaintances from a church.

So, the above is how to make an initial connection with someone, and it assumes that you bridge these hours in some way. This is much harder, as you get older. But, perhaps something to think about when you start new chapters of your life.

The Fable of the Dragon-Tyrant

“Once upon a time, the planet was tyrannized by a giant dragon. The dragon stood taller than the largest cathedral, and it was covered with thick black scales. Its red eyes glowed with hate, and from its terrible jaws flowed an incessant stream of evil-smelling yellowish-green slime. It demanded from humankind a blood-curdling tribute: to satisfy its enormous appetite, ten thousand men and women had to be delivered every evening at the onset of dark to the foot of the mountain where the dragon-tyrant lived. Sometimes the dragon would devour these unfortunate souls upon arrival; sometimes again it would lock them up in the mountain where they would wither away for months or years before eventually being consumed…”

-Nick Bostrom, “The Fable of the Dragon-Tyrant.” nickbostrom.com. Originally published in Journal of Medical Ethics, 2005, Vol. 31, No. 5, pp. 273-277.

And now, almost 17 years after the publication of this fable, there appears to be the first weapon against the dragon tyrant of the tale:

“Senolytic vaccination also improved normal and pathological phenotypes associated with aging, and extended the male lifespan of progeroid mice. Our results suggest that vaccination targeting seno-antigens could be a potential strategy for new senolytic therapies.”

-Suda, M., Shimizu, I., Katsuumi, G. et al. Senolytic vaccination improves normal and pathological age-related phenotypes and increases lifespan in progeroid mice. Nat Aging 1, 1117–1126 (2021). https://doi.org/10.1038/s43587-021-00151-2

I’d guess probably 20 years until this is a regular feature of clinical therapy.

Questions About Technology Investment: CharaChorder

“The CharaChorder is a new kind of typing peripheral that promises to let people type at superhuman speeds. It’s so fast that the website Monkeytype, which lets users participate in typing challenges and maintains its own leaderboard, automatically flagged CharaChorder’s CEO as a cheater when he attempted to post his 500 WPM score on its leaderboards.

It’s a strange looking device, the kind of thing Keanu Reeves would interface with in Johnny Mnemonic. Your palms rest on two black divots out of which rise nine different finger sized joysticks. These 18 sticks move in every direction and, its website claims, can hit every button you need on a regular keyboard. “CharaChorder switches detect motion in 3 dimensions so users have access to over 300 unique inputs without their fingers breaking contact with the device,” it said.”

-Matthew Gault, “This Keyboard Lets People Type So Fast It’s Banned From Typing Competitions.” Vice. January 6, 2022.

Open Question: What is a good “investment” in technology?

Let’s imagine you have a child that is at the age where they are starting to use a computer and a QWERTY-style keyboard. Do you spend $250 and get them this kind of peripheral knowing:

  • It’s a new technology that likely will not be around in 20 years
  • It seems likely that in 20 years or so the main input for computing will be via voice and/or video
  • It is even possible that in 20 years everyone will have a brain-computer interface.

Personally, I think it is useful to learn how to use new devices, even if they turn out to be novelties. It’s easy to see how certain popular devices that became obsolete paved the way for the devices that came after them. Examples:

  • Mainframe computing led to personal computing which led to mobile computing
  • Blackberry, PalmOS, iPods were the precursors to Android and iPhones
  • Every few years, someone makes a new chat app, from ICQ and IRC to Telegram and Discord.

Familiarity with the previous version can help you transition to new variants. So, it’s probably a good idea to get familiar with technologies, even if you don’t think they will last.

The Purpose of Dialogue

Open Question: What is the purpose of dialogue?

  • People generally only change their minds when in conversation with someone that loves them. How many conversations are we having with people we love?
  • Maybe the point of conversation is to change our own minds. If we aren’t coming from that place, are we in dialogue at all?
  • Trying to change other people’s mind is often a futile exercise. If true, then why bother having any dialogue at all?

Related: Agree to Disagree or Fight, Really Reading Means Being Open To Change, Arguing for a Different Reality, Celebrating Our Differences, and others.

The Computers Are Out of Their Boxes

“What does that mean? Well, computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for. 

“The core of computing is changing from number-crunching to decision-­making,” says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes…

…AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-­learning algorithm—a type of AI that learns how to solve a task through trial and error—to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of—but they worked. This kind of AI could one day develop better, more efficient chips.”

—Will Douglas Heaven, “How AI is reinventing what computers are.” MIT Technology Review. October 22, 2021.

Open Question: As artificial intelligence becomes more pervasive, what limits should we impose, as a society and on ourselves, on how we use this technology in order to minimize its negative impact?

The key changes described in this article:

  • Volume: less precise calculations carried out in parallel
  • Defining success by outcomes rather than defining processes
  • Machine autonomy, i.e., artificial intelligence prompts people, acting as surrogate and agent

All to the good. But, there are negative social implications. As this technology reaches critical mass among populations, a significant portion of people will off-load a subset of decisions to machines, which may be a net positive. However, it is easy to imagine that it undermines people’s ability to think for themselves, that the subset creeps into classes of decisions where it shouldn’t, e.g., prison sentences, and that, within the areas where it is commonly used, it will create a decision-making monoculture that crowds out alternative values. For example, suppose a dominant flavor of A.I. decides that Zojirushi makes the best automated rice cookers, which they do, and only makes that recommendation. Some large percentage of people then buy only Zojirushi. The natural result is that other rice cooker options are pushed out of the market, making it difficult for new, possibly better, companies to emerge.

Lots of strange network effects will happen due to this trend, and they should be given careful consideration. Even on a personal level, it would be good to have a clear idea of what exactly you’d like to use A.I. for, so you don’t undermine your own autonomy, as has happened in other computing eras, such as when Microsoft dominated the desktop market.

Live Long & Prosper

“Behavioral scientists have spent a lot of time studying what makes us happy (and what doesn’t). We know happiness can predict health and longevity, and happiness scales can be used to measure social progress and the success of public policies. But happiness isn’t something that just happens to you. Everyone has the power to make small changes in our behavior, our surroundings and our relationships that can help set us on course for a happier life.”

-Tara Parker-Pope, “How To Be Happy.” The New York Times.

Open Question: What does it mean to be “happy”?

In brief, the author seems to take the ideas of Blue Zones, i.e., places where people tend to be exceptionally long-lived, and flesh these concepts out with “happiness” research. The nine key ideas of Blue Zones:

  1. Move naturally, or have a lifestyle that incorporates movement without doing movement for movement’s sake, a.k.a. exercise.
  2. Have a purpose.
  3. Downshift, take time every day, week, month and year to do nothing or be contemplative.
  4. The 80% Rule for eating. Eat until you are 80% full.
  5. Eat mostly plants.
  6. Drink alcohol in moderation, 1-2 servings a day.
  7. Belong to a community.
  8. Prioritize your relationships.
  9. Make sure the relationships are with good people.

The New York Times’ “How to Be Happy” reframes these into categories: Mind, Home, Relationships, Work & Money, and Happy Life. Then, it attempts to provide more detailed advice.

Mind

  1. Become acquainted with cognitive behavioral therapy, i.e., become proficient at managing negative thinking.
  2. Box breathing for acute situations and breath-focused meditation to cultivate a more equanimous disposition.
  3. Rewrite your personal story, positive without the pedestal.
  4. Exercise.
  5. Make an effort to look for the positive in any situation.

Home

  1. Find a good place to live and a good community within it to be part of.
  2. Be out in a natural setting.
  3. Keep what you need, discard the rest.

Relationships

  1. Spend time with happy people. Conversely, avoid the unhappy and the unlucky, the stupid, Hoodoos, toxic people, psychic vampires, and associated others. Obviously, the negative formulation is a hot topic here at cafebedouin.org.
  2. Get a pet. [Editor’s note: Pets, children and other people aren’t going to make you happy, save you, etc.]
  3. Learn to enjoy being alone. In this historical moment, with fewer communities and relationships mediated through the Internet, it’s an important skill. If you can’t manage it, find ways around it, e.g., join an intentional community. If you are turning on the radio or television to hear human voices and escape your own thoughts, you might want to think about finding ways of being better company to yourself.

Work and Money

  1. Money isn’t going to make you happy. The more money you have past a certain threshold, the more problems you will have. But, being poor is no virtue and is its own source of suffering. Try to avoid the material extremes.
  2. The New York Times wants you to find your purpose at work. Right livelihood is important, but defining ourselves through our work is a major issue in the post-industrial age. When surnames became necessary, people chose their occupation. Think of all the occupational last names: Smith, Miller, Cooper, etc. The problem with finding purpose at work is that it often turns into our life’s purpose. Our life should be about more than work.
  3. Find ways to reclaim your time, which I interpret to mean work less.

Happy Life

  1. Be generous. Show gratitude.
  2. Do things for other people.
  3. Stop being a judgmental prick to yourself and others.

Conclusion

Something about The New York Times’ presentation leaves much to be desired. Is it the focus on work? Is it because much of it seems like platitudes? I’m not entirely sure. The ideas aren’t bad, particularly the ones that stem directly from the Blue Zones suggestions. But, the focus on “nesting” in the bedroom, volunteering (with the implication that it be the modern form and involve some kind of institution) and so forth managed to rub me the wrong way. Still, most of this is good advice, when you get down to the nut of it.