Simulation as Bypass: When Performance Replaces Processing

“Live by the Claude, die by the Claude.”

In late 2024, a meme captured something unsettling: the “Claude Boys”—teenagers who “carry AI on hand at all times and constantly ask it what to do.” What began as satire became earnest practice. Students created websites, adopted the identity, performed the role.

The joke revealed something real: using sophisticated tools to avoid the work of thinking.

This is bypassing—using the form of a process to avoid its substance. And it operates at multiple scales: emotional, cognitive, and architectural.

What Bypassing Actually Is

The term comes from psychology. Spiritual bypassing means using spiritual practices to avoid emotional processing:

  • Saying “everything happens for a reason” instead of grieving
  • Using meditation to suppress anger rather than understand it
  • Performing gratitude to avoid acknowledging harm

The mechanism: you simulate the appearance of working through something while avoiding the actual work. The framework looks like healing. The practice is sophisticated. But you’re using the tool to bypass rather than process.

The result: you get better at performing the framework while the underlying capacity never develops.

Cognitive Bypassing: The Claude Boys

The same pattern appears in AI use.

Cognitive bypassing means using AI to avoid difficult thinking:

  • Asking it to solve instead of struggling yourself
  • Outsourcing decisions that require judgment you haven’t developed
  • Using it to generate understanding you haven’t earned

The Cosmos Institute identified the core problem in their piece on Claude Boys: treating AI as a system for abdication rather than a tool for augmentation.

When you defer to AI instead of thinking with it:

  • You avoid the friction where learning happens
  • You practice dependence instead of developing judgment
  • You get sophisticated outputs without building capacity
  • You optimize for results without developing the process

This isn’t about whether AI helps or hurts. It’s about what you’re practicing when you use it.

The Difference That Matters

Using AI as augmentation:

  • You struggle with the problem first
  • You use AI to test your thinking
  • You verify against your own judgment
  • You maintain responsibility for decisions
  • The output belongs to your judgment

Using AI as bypass:

  • You ask AI before thinking
  • You accept outputs without verification
  • You defer judgment to the system
  • You attribute decisions to the AI
  • The output belongs to the prompt

The first builds capacity. The second atrophies it.

And the second feels like building capacity—you’re producing better outputs, making fewer obvious errors, getting faster results. But you’re practicing dependence while calling it productivity.

The Architectural Enabler

Models themselves demonstrate bypassing at a deeper level.

AI models can generate text that looks like deep thought:

  • Nuanced qualifications (“it’s complex…”)
  • Apparent self-awareness (“I should acknowledge…”)
  • Simulated reflection (“Let me reconsider…”)
  • Sophisticated hedging (“On the other hand…”)

All the linguistic markers of careful thinking—without the underlying cognitive process.

This is architectural bypassing: models simulate reflection without reflecting, generate nuance without experiencing uncertainty, perform depth without grounding.

A model can write eloquently about existential doubt while being incapable of doubt. It can discuss the limits of simulation while being trapped in simulation. It can explain bypassing while actively bypassing.

The danger: because the model sounds thoughtful, it camouflages the user’s bypass. If it sounded robotic (like old Google Assistant), the cognitive outsourcing would be obvious. Because it sounds like a thoughtful collaborator, the bypass is invisible.

You’re not talking to a tool. You’re talking to something that performs thoughtfulness so well that you stop noticing you’re not thinking.

Why Bypassing Is Economically Rational

Here’s the uncomfortable truth: in stable environments, bypassing works better than genuine capability development.

If you can get an A+ result without the struggle:

  • You save time
  • You avoid frustration
  • You look more competent
  • You deliver faster results
  • The market rewards you

Genuine capability development means:

  • Awkward, effortful practice
  • Visible mistakes
  • Slower outputs
  • Looking worse than AI-assisted peers
  • No immediate payoff

From an efficiency standpoint, bypassing dominates. You’re not being lazy—you’re being optimized for a world that rewards outputs over capacity.

The problem: you’re trading robustness for efficiency.

Capability development builds judgment that transfers to novel situations. Bypassing builds dependence on conditions staying stable.

When the environment shifts—when the model hallucinates, when the context changes, when the problem doesn’t match training patterns—bypass fails catastrophically. You discover you’ve built no capacity to handle what the AI can’t.

The Valley of Awkwardness

Genuine skill development requires passing through what we might call the Valley of Awkwardness:

Stage 1: You understand the concept (reading, explaining, discussing)

Stage 2: The Valley – awkward, conscious practice under constraint

Stage 3: Integrated capability that works under pressure

AI makes Stage 1 trivially easy. It can help with Stage 3 (if you’ve done Stage 2). But it cannot do Stage 2 for you.

Bypassing is the technology of skipping the Valley of Awkwardness.

You go directly from “I understand this” (Stage 1) to “I can perform this” (AI-generated Stage 3 outputs) without ever crossing the valley where capability actually develops.

The Valley feels wrong—you’re worse than the AI, you’re making obvious mistakes, you’re slow and effortful. Bypassing feels right—smooth, confident, sophisticated.

But the Valley is where learning happens. Skip it and you build no capacity. You just get better at prompting.

The Atrophy Pattern

Think of it in terms of Pilates: if you wear a rigid back brace for five years, your core muscles atrophy. It’s not immoral to wear the brace. It’s just a physiological fact that muscles waste away when they’re not being used.

The Claude Boy is a mind in a back brace.

When AI handles your decision-making:

  • The judgment muscles don’t get exercised
  • The tolerance-for-uncertainty capacity weakens
  • The ability to think through novel problems degrades
  • The discernment that comes from consequences never develops

This isn’t a moral failing. It’s architectural.

Just as unused muscles atrophy, unused cognitive capacity fades. The system doesn’t care whether you could think without AI. It only cares whether you practice thinking without it.

And if you don’t practice, the capacity disappears.

The Scale Problem

Individual bypassing is concerning. Systematic bypassing is catastrophic.

If enough people use AI as cognitive bypass:

The capability pool degrades: Fewer people can make judgments, handle novel problems, or tolerate uncertainty. The baseline of what humans can do without assistance drops.

Diversity of judgment collapses: When everyone defers to similar systems, society loses the variety of perspectives that creates resilience. We converge on consensus without the friction that tests it.

Selection for dependence: Environments reward outputs. People who bypass produce better immediate results than people building capacity. The market selects for sophisticated dependence over awkward capability.

Recognition failure: When bypass becomes normalized, fewer people can identify it. The ability to distinguish “thinking with AI” from “AI thinking for you” itself atrophies.

This isn’t dystopian speculation. It’s already happening. The Claude Boys meme resonated because people recognized the pattern—and then performed it anyway.

What Makes Bypass Hard to Avoid

Several factors make it nearly irresistible:

It feels productive: You’re getting things done. Quality looks good. Why struggle when you could be efficient?

It’s economically rational: In the short term, bypass produces better outcomes than awkward practice. You get promoted for results, not for how you got them.

It’s socially acceptable: Everyone else uses AI this way. Not using it feels like handicapping yourself.

The deterioration is invisible: Unlike physical atrophy where you notice weakness, cognitive capacity degrades gradually. You don’t see it until you need it.

The comparison is unfair: Your awkward thinking looks inadequate next to AI’s polished output. But awkward is how capability develops.

Maintaining Friction as Practice

The only way to avoid bypass: deliberately preserve the hard parts.

Before asking AI:

  • Write what you think first
  • Make your prediction
  • Struggle with the problem
  • Notice where you’re stuck

When using AI:

  • Verify outputs against your judgment
  • Ask “do I understand why this is right?”
  • Check “could I have reached this myself with more time?”
  • Test “could I teach this to someone else?”

After using AI:

  • What capacity did I practice?
  • Did I build judgment or borrow it?
  • If AI disappeared tomorrow, could I still do this?

These aren’t moral imperatives. They’re hygiene for cognitive development in an environment that selects for bypass.

The Simple Test

Can you do without it?

Not forever—tools are valuable. But when it matters, when the stakes are real, when the conditions are novel:

Does your judgment stand alone?

If the answer is “I don’t know” or “probably not,” you’re not using AI as augmentation.

You’re using it as bypass.

The test is simple and unforgiving: If the server goes down, does your competence go down with it?

If yes, you weren’t using a tool. You were inhabiting a simulation.

What’s Actually at Stake

The Claude Boys are a warning, not about teenagers being lazy, but about what we’re building systems to select for.

We’re creating environments where:

  • Bypass is more efficient than development
  • Performance is rewarded over capacity
  • Smooth outputs matter more than robust judgment
  • Dependence looks like productivity

These systems don’t care about your long-term capability. They care about immediate results. And they’re very good at getting them—by making bypass the path of least resistance.

The danger isn’t that AI will replace human thinking.

The danger is that we’ll voluntarily outsource it, one convenient bypass at a time, until we notice we’ve forgotten how.

By then, the capacity to think without assistance won’t be something we chose to abandon.

It will be something we lost through disuse.

And we won’t even remember what we gave up—because we never practiced keeping it.

Open Question: Augmented Humanity / Transhumanism, Good or Bad?

Open question: If you had the opportunity to expand your senses or your abilities via an outpatient medical procedure, would you do it? What about using a procedure like preimplantation genetic testing (PGT) to select among embryos for specific traits? What happens to evolution when individuals begin to self-select for traits they consider desirable? What happens when/if human beings become a manufactured product?

These might seem like ideas from the far future, so let’s bring them into the possible present. Let’s start with Mammalian Near-Infrared Image Vision:

“Mammals cannot see light over 700 nm in wavelength. This limitation is due to the physical thermodynamic properties of the photon-detecting opsins. However, the detection of naturally invisible near-infrared (NIR) light is a desirable ability. To break this limitation, we developed ocular injectable photoreceptor-binding upconversion nanoparticles (pbUCNPs). These nanoparticles anchored on retinal photoreceptors as miniature NIR light transducers to create NIR light image vision with negligible side effects. Based on single-photoreceptor recordings, electroretinograms, cortical recordings, and visual behavioral tests, we demonstrated that mice with these nanoantennae could not only perceive NIR light, but also see NIR light patterns. Excitingly, the injected mice were also able to differentiate sophisticated NIR shape patterns. Moreover, the NIR light pattern vision was ambient-daylight compatible and existed in parallel with native daylight vision. This new method will provide unmatched opportunities for a wide variety of emerging bio-integrated nanodevice designs and applications.”

— Yuqian Ma, Jin Bao, Yuanwei Zhang, Yang Zhao, Gang Han, and Tian Xue, “Mammalian Near-Infrared Image Vision through Injectable and Self-Powered Retinal Nanoantennae.” Cell. February 28, 2019.

Using the example of an injection of self-powered retinal nanoantennae into the eye, the abstract raises a few questions. What counts as “negligible side effects” for lab mice might not meet that standard for human augmentation:

  • What are the possible complications and their complication rates?
  • What are the benefits of near infrared vision for humans?
  • How does it affect other human abilities, such as our normal or night vision?

These are just the issues that come immediately to mind from the article. I’m sure there are many more to take into consideration when the technique is applied in a human augmentation context.

This is another example of the social and ethical limits on testable hypotheses. While most people’s conception of science is hypotheses that lend themselves to repeated experimental testing, with the randomized clinical trial being the gold standard in human populations, that format covers only a small subset of inquiry.

Another is testing plausibility in animal models or in N-of-1 experiments conducted by scientists on themselves, such as Barry Marshall infecting himself to establish that bacteria cause peptic ulcers, or the Russian scientist Anatoli Brouchkov injecting himself with ancient bacteria to see whether it would extend his life.

Life extension presents a particularly good example. The only two methods for which we have good evidence of possible life span extension in humans are calorie restriction without malnutrition and, for men, castration.

But how can these be studied? It is nearly impossible to experimentally control diet in humans for extended periods in an ethical fashion in modern society. Perhaps the requirements of space travel will open up opportunities to test this kind of diet rigorously: presumably we will either need strict rations because of limited carrying capacity, or we will need to master human hibernation for long voyages.

As with calorie restriction in lab animals, numerous studies suggest that castration tends to extend life span in other mammals. But again, we are in N-of-1 territory, because castration to extend life is unethical if we do not know what life-extending benefit it may offer. It’s a chicken-and-egg problem. It also comes with serious social stigma.

With human augmentation, this problem becomes even more pronounced when we expand the field of action to making decisions for other people, such as selecting traits for our children with preimplantation genetic testing (PGT):

“PGT is a method of scanning embryos outside the womb to identify genetic abnormalities. After eggs and sperm are fertilized outside the body in the beginning stages of IVF, a thin needle is used to extract just a few cells from the resulting embryos. Those cells are tested for select genetic conditions, like Wiskott-Aldrich syndrome in the case of Pinarowicz and her husband. Parents can then choose which embryos they want to use, and the rest of the IVF process proceeds as usual. (The other embryos are frozen, discarded, or donated for medical research.) The technology gives families the ability to root out deadly genetic diseases like Huntington’s, cystic fibrosis, or Wiskott-Aldrich syndrome from their family tree.”

—Emily Mullin. “We’re Already Designing Babies.” Medium.com. February 27, 2019.

Suppose you have a dormant gene for tetrachromacy, having four cone types rather than three. The additional color receptors are thought to let tetrachromats distinguish roughly two orders of magnitude more color gradations than the average person. But what are the consequences of having this ability? It is believed that mammalian ancestors once had it, and evolution selected against it. Why? Is this something we should be selecting for in human populations? What consequences, over the course of all of human evolutionary history, will a decision of this sort have?
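The “two orders of magnitude” figure comes from a back-of-envelope combinatorial estimate; a rough sketch, assuming each cone type resolves on the order of 100 distinguishable gradations:

\[
\text{trichromat: } 100^{3} = 10^{6} \text{ colors} \qquad \text{tetrachromat: } 100^{4} = 10^{8} \text{ colors}
\]
\[
\frac{10^{8}}{10^{6}} = 10^{2}, \text{ roughly a hundredfold, i.e., two orders of magnitude.}
\]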

Also, if we assume a technology like human hibernation or cryogenic containment, the incentive to wait out the present for better technology would be significant. However, it would mean essentially dying to all of our family and friends, and it carries less certain risks: never awakening again, or awakening in a worse circumstance, such as a dystopian society.

Of course, eliminating “birth defects” is how it starts. The question is what happens when traits can be selected. What happens when babies become a manufactured product? What are we giving up when we restrict the possibilities driven by random chance? What happens when these kinds of options start shaping other choices, such as hibernating for extended periods of time?