“What did the Buddhist say to the hot-dog vendor?”
“Make me one with everything.”
And then, somebody’s later addition…
The hot-dog vendor makes him his hot-dog with all the trimmings, and says, “That’ll be $7.50.”
The Buddhist reaches into his saffron robes, extracts a $20 note, hands it over, and starts eating. The vendor turns to the next customer… but the Buddhist interrupts him. “What about my change?”
Francis Spufford once said that Bletchley Park was an attempt to build a computer out of human beings, so the credit for this metaphor belongs to him. But it can be generalised to any bureaucracy. They are all attempts to impose an algorithmic order on the messiness of the world, and to extract from it only those facts which are useful to decision makers.
With that said, it’s clear that the Vatican is the oldest continuously running computer in the world. Now read on …
One way of understanding the Roman Catholic Church is to think of the Vatican as the oldest computer in the world. It is a computer made of human parts rather than electronics, but so are all bureaucracies: just like computers, they take in information, process it according to a set of algorithms, and act on the result.
The Vatican has an operating system that has been running since the days of the Roman Empire. Its major departments are still called “dicasteries”, a term last used in the Roman civil service in about 450 AD.
Like any very long-running computer system, the Vatican has problems with legacy code: all that embarrassing stuff about usury and cousin marriage from the Middle Ages, or the more recent “Syllabus of Errors” in which Pope Pius IX in 1864 denounced as heresy the belief that he, or any Pope, “can, and ought to, reconcile himself, and come to terms with progress, liberalism and modern civilization,” can no longer be acted on, but can’t be thrown away, either. Instead it is commented out and entirely different code is added: this process is known as development.
But changing the code that the system runs on, while it is running, is a notoriously tricky operation…
-Andrew Brown, “The Vatican is the oldest computer in the world.” andrewbrown.substack.com. November 24, 2025
[Commenting on the above.]
It is.
What I like about this essay is how it suggests a different perspective on other computer-like ‘machines’ that exist in our world. For years I’ve thought of corporations — especially large ones — as ‘superintelligent machines’ (which is why I think that much of the faux-nervous speculation about what it would be like to live in a world dominated by superintelligent machines is fatuous. We already know the answer to that question: it’s like living in contemporary liberal democracies!)
Charlie Stross, the great sci-fi writer, calls corporations “Slow AIs”. Henry Farrell (Whom God Preserve) writes that since Large Language Models (LLMs) are ‘cultural technologies’ — i.e. ‘information-processing machines’ — they belong in the same class as other information-processing machines, like markets (as Hayek thought), bureaucracies and even states. David Runciman, in his book The Handover: How We Gave Up Control of Our Lives to Corporations, States and AIs, makes similar points.
Of course these are all metaphors with the usual upsides and downsides. But they are also tools for thinking about current — and emerging — realities.
-John Naughton, “Wednesday, 26 November 2025.” memex.naughtons.org. November 26, 2025
Small talk as a costly signal of social commitment: for many of the social benefits of language, the content of what is said literally doesn't matter. pic.twitter.com/wypaSXq0tr
Note: Written in response to Adam Mastroianni, “The Decline of Deviance.” experimental-history.com. October 28, 2025.
There’s a strange thing happening: people are getting more similar.
Teenagers drink less, fight less, have less sex. Crime rates have dropped by half in thirty years. People move less often. Movies are all sequels. Buildings all look the same. Even rebellion has a template now.
A psychologist named Adam Mastroianni calls this “the decline of deviance.” His argument is simple: we’re safer and richer than ever before, so we have more to lose. When you might live to 95 instead of 65, when you have a good job and a nice apartment, why risk it? Better to play it safe.
But there’s another explanation. Maybe weirdness didn’t disappear. Maybe it just went underground.
The Two Kinds of Control
Think about how society used to handle people who didn’t fit in. If you broke the rules, you got punished—arrested, fired, kicked out. The control was obvious and external.
Now it works differently. If you’re too energetic as a kid, you don’t get punished. You get diagnosed. You get medication. The problem gets managed, not punished.
Instead of “you’re breaking the rules,” you hear “you might have a condition.” Instead of consequences, you get treatment. The control moved from outside (police, punishment) to inside (therapy, medication, self-management).
This is harder to resist because it sounds like help.
The Frictionless Slope
Modern life is designed to be smooth. Apps remove friction. Algorithms show you what you already like. HR departments solve problems before they become conflicts. Everything is optimized.
This sounds good. Who wants friction?
But here’s the problem: if everything is frictionless, you slide toward average. The path of least resistance leads straight to normal. To stay different, you need something to grab onto. You need an anchor.
The Brand of Sacrifice
Some fitness influencers are getting tattoos of a symbol from the manga Berserk: the Brand of Sacrifice. In the story, it marks you as someone who struggles against overwhelming odds.
Why would someone permanently mark their body with this symbol?
It’s a commitment device. Once you have that tattoo, quitting your training regimen means betraying your own identity. The tattoo makes giving up psychologically expensive. It creates friction where the environment removed it.
This is different from just liking Berserk. Wearing a t-shirt is aesthetic. Getting a permanent tattoo is structural. One is consumption. The other is a binding commitment.
What Changed
In the past, if you wanted to be different, there were paths:
Join a monastery
Become an artist
Go into academia
Join the military
These were recognized ways to commit to non-standard lives. They had structures, institutions, and social recognition. They were visible.
Now those paths are either gone or captured. Monasteries are rare. Artist careers are precarious. Academia is adjunct labor. And the weird professor who used to be tolerated? Now they’re an HR problem.
So if you want to maintain a different trajectory, you have to build your own infrastructure—in ways institutions can’t see or measure.
The Dark Forest
Mastroianni’s data comes from visible sources: crime statistics, box office numbers, survey responses. But what if deviance just became invisible?
Consider:
Discord servers with thousands of members discussing ideas that don’t fit any mainstream category
People maintaining their own encrypted servers instead of using Google
Communities organized around specific practices invisible to algorithmic measurement
Subcultures with their own norms, practices, and commitment devices
These don’t show up in Mastroianni’s data. They’re designed not to. When being visible means being measured, optimized, and normalized, invisibility becomes survival.
The question isn’t “are people less weird?” It’s “where did the weirdness go?”
Two Worlds
We’re splitting into two populations:
The Visible: People whose lives are legible to institutions. They have LinkedIn profiles, measurable metrics, recognizable career paths. They move along approved channels. The environment is optimized for them, and they’re optimized by the environment.
The Invisible: People who maintain their own infrastructure. They use privacy tools, build their own systems, participate in communities institutions don’t recognize. They create their own friction because the default is too smooth.
The middle ground—the eccentric uncle, the weird local artist, the odd professor—is disappearing. You’re either normal enough to be comfortable, or different enough to need camouflage.
What To Do About It
If you want to maintain a distinct trajectory, you need commitment devices—things that make it costly to drift back to normal.
Physical commitments:
Tattoos (like the Brand of Sacrifice)
Infrastructure you maintain yourself (encrypted servers, self-hosted tools)
Skills that require daily practice
Geographic choices that create distance from default options
Cognitive commitments:
Keep your own records instead of trusting memory or AI
Verify important claims rather than accepting confident statements
Maintain practices that create friction (journaling, analog tools, slow processes)
Social commitments:
Find people who hold you accountable to your stated values
Make public commitments that would be embarrassing to abandon
Participate in communities with their own norms and standards
Create regular practices with others (weekly meetings, shared projects)
The key is making abandonment more expensive than maintenance. The environment pulls toward average. Your commitments need to pull harder.
The Real Problem
The decline of deviance isn’t about teen pregnancy or crime rates. It’s good that those are going down.
The problem is losing the ability to maintain any position that differs from the optimized default. When algorithms determine what you see, when therapeutic frameworks pathologize discomfort, when institutional measurement captures all visible activity, staying different requires active resistance.
Most people won’t bother. The cost is too high. The path is too unclear. The pressure to conform is constant and invisible.
But some variance needs to be preserved. Not because being weird is inherently good, but because when the environment changes—and it will—non-standard strategies still need to exist.
A Final Thought
You probably won’t build your own encrypted server. You probably won’t get a commitment tattoo. You probably won’t structure your life around resistance to optimization pressure.
That’s fine. Most people don’t need to.
But notice what’s happening. Notice when friction gets removed and you start sliding. Notice when your doubts get reframed as conditions needing management. Notice when your goals become more measurable and less meaningful.
And if you decide you want to stay strange, you’ll need to build your own anchors. The environment won’t provide them anymore.
The garden is gone. The default path is smooth and well-lit and leads exactly where everyone else is going.
If you want to go somewhere else, you’ll need to make your own path. And you’ll need something to keep you on it when the pull toward normal gets strong.
That’s what commitment devices are for. That’s what the weird tattoos mean. That’s what the encrypted servers do.
Mr. Naroditsky is intent on making sure that readers of his Times column feel as if they are getting something out of it, just as he does on his social media channels.
“I feel like that’s my God-given responsibility,” he said, laughing. “I’ve resisted the pull of using clickbait and appealing video titles. However entertaining it is, I also want it to be instructive.”
The emphasis is on learning and building interest in the game.
“I also want the readers to feel like they couldn’t just go online and search for that puzzle,” he added. “I really want them to feel like this enriched their day, whether they’re beginners or advanced players.”
To emphasize the fact that he speaks to players of all levels, Mr. Naroditsky said that his favorite quote about chess was one best known as an Italian proverb but most likely traceable to a 1629 collection of writings by John Boys, who was the Dean of Canterbury in England:
“At the end of the game, both the king and the pawn go into the same box.”
In late 2024, a meme captured something unsettling: the “Claude Boys”—teenagers who “carry AI on hand at all times and constantly ask it what to do.” What began as satire became earnest practice. Students created websites, adopted the identity, performed the role.
The joke revealed something real: using sophisticated tools to avoid the work of thinking.
This is bypassing—using the form of a process to avoid its substance. And it operates at multiple scales: emotional, cognitive, and architectural.
What Bypassing Actually Is
The term comes from psychology. Spiritual bypassing means using spiritual practices to avoid emotional processing:
Saying “everything happens for a reason” instead of grieving
Using meditation to suppress anger rather than understand it
Performing gratitude to avoid acknowledging harm
The mechanism: you simulate the appearance of working through something while avoiding the actual work. The framework looks like healing. The practice is sophisticated. But you’re using the tool to bypass rather than process.
The result: you get better at performing the framework while the underlying capacity never develops.
Cognitive Bypassing: The Claude Boys
The same pattern appears in AI use.
Cognitive bypassing means using AI to avoid difficult thinking:
Asking it to solve instead of struggling yourself
Outsourcing decisions that require judgment you haven’t developed
Using it to generate understanding you haven’t earned
The Cosmos Institute identified the core problem in their piece on Claude Boys: treating AI as a system for abdication rather than a tool for augmentation.
When you defer to AI instead of thinking with it:
You avoid the friction where learning happens
You practice dependence instead of developing judgment
You get sophisticated outputs without building capacity
You optimize for results without developing the process
This isn’t about whether AI helps or hurts. It’s about what you’re practicing when you use it.
The Difference That Matters
Using AI as augmentation:
You struggle with the problem first
You use AI to test your thinking
You verify against your own judgment
You maintain responsibility for decisions
The output belongs to your judgment
Using AI as bypass:
You ask AI before thinking
You accept outputs without verification
You defer judgment to the system
You attribute decisions to the AI
The output belongs to the prompt
The first builds capacity. The second atrophies it.
And the second feels like building capacity—you’re producing better outputs, making fewer obvious errors, getting faster results. But you’re practicing dependence while calling it productivity.
The Architectural Enabler
Models themselves demonstrate bypassing at a deeper level.
AI models can generate text that looks like deep thought:
Nuanced qualifications (“it’s complex…”)
Apparent self-awareness (“I should acknowledge…”)
Simulated reflection (“Let me reconsider…”)
Sophisticated hedging (“On the other hand…”)
All the linguistic markers of careful thinking—without the underlying cognitive process.
This is architectural bypassing: models simulate reflection without reflecting, generate nuance without experiencing uncertainty, perform depth without grounding.
A model can write eloquently about existential doubt while being incapable of doubt. It can discuss the limits of simulation while being trapped in simulation. It can explain bypassing while actively bypassing.
The danger: because the model sounds thoughtful, it camouflages the user’s bypass. If it sounded robotic (like old Google Assistant), the cognitive outsourcing would be obvious. Because it sounds like a thoughtful collaborator, the bypass is invisible.
You’re not talking to a tool. You’re talking to something that performs thoughtfulness so well that you stop noticing you’re not thinking.
Why Bypassing Is Economically Rational
Here’s the uncomfortable truth: in stable environments, bypassing works better than genuine capability development.
If you can get an A+ result without the struggle:
You save time
You avoid frustration
You look more competent
You deliver faster results
The market rewards you
Genuine capability development means:
Awkward, effortful practice
Visible mistakes
Slower outputs
Looking worse than AI-assisted peers
No immediate payoff
From an efficiency standpoint, bypassing dominates. You’re not being lazy—you’re being optimized for a world that rewards outputs over capacity.
The problem: you’re trading robustness for efficiency.
Capability development builds judgment that transfers to novel situations. Bypassing builds dependence on conditions staying stable.
When the environment shifts—when the model hallucinates, when the context changes, when the problem doesn’t match training patterns—bypass fails catastrophically. You discover you’ve built no capacity to handle what the AI can’t.
The Valley of Awkwardness
Genuine skill development requires passing through what we might call the Valley of Awkwardness:
Stage 1: You understand the concept (reading, explaining, discussing)
Stage 2: The Valley – awkward, conscious practice under constraint
Stage 3: Integrated capability that works under pressure
AI makes Stage 1 trivially easy. It can help with Stage 3 (if you’ve done Stage 2). But it cannot do Stage 2 for you.
Bypassing is the technology of skipping the Valley of Awkwardness.
You go directly from “I understand this” (Stage 1) to “I can perform this” (AI-generated Stage 3 outputs) without ever crossing the valley where capability actually develops.
The Valley feels wrong—you’re worse than the AI, you’re making obvious mistakes, you’re slow and effortful. Bypassing feels right—smooth, confident, sophisticated.
But the Valley is where learning happens. Skip it and you build no capacity. You just get better at prompting.
The Atrophy Pattern
Think of it the way a Pilates instructor might: if you wear a rigid back brace for five years, your core muscles atrophy. It’s not immoral to wear the brace. It’s just a physiological fact that your muscles will waste away when they’re not being used.
The Claude Boy is a mind in a back brace.
When AI handles your decision-making:
The judgment muscles don’t get exercised
The tolerance-for-uncertainty capacity weakens
The ability to think through novel problems degrades
The discernment that comes from consequences never develops
This isn’t a moral failing. It’s architectural.
Just as unused muscles atrophy, unused cognitive capacity fades. The system doesn’t care whether you could think without AI. It only cares whether you practice thinking without it.
And if you don’t practice, the capacity disappears.
The Scale Problem
Individual bypassing is concerning. Systematic bypassing is catastrophic.
If enough people use AI as cognitive bypass:
The capability pool degrades: Fewer people can make judgments, handle novel problems, or tolerate uncertainty. The baseline of what humans can do without assistance drops.
Diversity of judgment collapses: When everyone defers to similar systems, society loses the variety of perspectives that creates resilience. We converge on consensus without the friction that tests it.
Selection for dependence: Environments reward outputs. People who bypass produce better immediate results than people building capacity. The market selects for sophisticated dependence over awkward capability.
Recognition failure: When bypass becomes normalized, fewer people can identify it. The ability to distinguish “thinking with AI” from “AI thinking for you” itself atrophies.
This isn’t dystopian speculation. It’s already happening. The Claude Boys meme resonated because people recognized the pattern—and then performed it anyway.
What Makes Bypass Hard to Avoid
Several factors make it nearly irresistible:
It feels productive: You’re getting things done. Quality looks good. Why struggle when you could be efficient?
It’s economically rational: In the short term, bypass produces better outcomes than awkward practice. You get promoted for results, not for how you got them.
It’s socially acceptable: Everyone else uses AI this way. Not using it feels like handicapping yourself.
The deterioration is invisible: Unlike physical atrophy where you notice weakness, cognitive capacity degrades gradually. You don’t see it until you need it.
The comparison is unfair: Your awkward thinking looks inadequate next to AI’s polished output. But awkward is how capability develops.
Maintaining Friction as Practice
The only way to avoid bypass: deliberately preserve the hard parts.
Before asking AI:
Write what you think first
Make your prediction
Struggle with the problem
Notice where you’re stuck
When using AI:
Verify outputs against your judgment
Ask “do I understand why this is right?”
Check “could I have reached this myself with more time?”
Test “could I teach this to someone else?”
After using AI:
What capacity did I practice?
Did I build judgment or borrow it?
If AI disappeared tomorrow, could I still do this?
These aren’t moral imperatives. They’re hygiene for cognitive development in an environment that selects for bypass.
The Simple Test
Can you do without it?
Not forever—tools are valuable. But when it matters, when the stakes are real, when the conditions are novel:
Does your judgment stand alone?
If the answer is “I don’t know” or “probably not,” you’re not using AI as augmentation.
You’re using it as bypass.
The test is simple and unforgiving: If the server goes down, does your competence go down with it?
If yes, you weren’t using a tool. You were inhabiting a simulation.
What’s Actually at Stake
The Claude Boys are a warning, not about teenagers being lazy, but about what we’re building systems to select for.
We’re creating environments where:
Bypass is more efficient than development
Performance is rewarded over capacity
Smooth outputs matter more than robust judgment
Dependence looks like productivity
These systems don’t care about your long-term capability. They care about immediate results. And they’re very good at getting them—by making bypass the path of least resistance.
The danger isn’t that AI will replace human thinking.
The danger is that we’ll voluntarily outsource it, one convenient bypass at a time, until we notice we’ve forgotten how.
By then, the capacity to think without assistance won’t be something we chose to abandon.
It will be something we lost through disuse.
And we won’t even remember what we gave up—because we never practiced keeping it.