The Atrophy of Connection: Why AI Companions Are More Dangerous Than Cognitive Prosthetics

A recent tweet from a former AI companion company founder has been making the rounds, describing how their product—an AI boyfriend named “Sam”—unexpectedly attracted more female users than their original AI girlfriend offerings. The thread offers a rare insider perspective on the mechanics of digital intimacy, detailing features like proxy phone numbers with ambient background noise and deliberately obscured visuals to maximize projection space. But buried in the founder’s casual product post-mortem is a more unsettling revelation: they eventually left the company because they “no longer wanted to work on anti-natalist products.”

That framing deserves attention. What makes an AI companion “anti-natalist”—opposed to human reproduction and relationship formation? The answer reveals something important about the difference between AI as a cognitive prosthetic and AI as an emotional one.

The Prosthetic That Creates the Disability

We’ve grown accustomed to discussing AI as a cognitive prosthetic. Calculators extend our mathematical capability. Search engines supplement our memory. GPS navigation compensates for limited spatial reasoning. These tools follow a familiar pattern: they help us perform tasks we find difficult or time-consuming, freeing cognitive resources for higher-order thinking.

But prosthetics have a shadow side. When they’re too frictionless, too seamlessly integrated, they don’t just supplement capability—they can atrophy it. Offload enough memory to devices and your recall weakens. Navigate exclusively by GPS and your mental mapping degrades. Rely on autocomplete and you practice generating ideas less often. The common mechanism is simple: when the prosthetic eliminates all friction, you stop exercising the underlying capacity.

AI companions follow this same pattern, but with a crucial difference: they aren’t supplementing a disability. They’re creating one.

The Perfect Recipe for Dependency

The tweet outlines why LLMs proved so effective as emotional prosthetics, particularly for women. The reasons are telling:

Women, the thread argues, consume intimacy differently than men: more text-driven than visual, more invested in narrative and emotional development than in imagery. These users preferred pre-built characters they could discover and gradually influence, mirroring the “I can fix him” dynamic of real relationships, and they wanted a hand in molding the character, giving feedback on personality traits and behaviors.

Most revealing: many female users already had partners. They weren’t filling a void left by loneliness—they were seeking “emotional support and availability that their partners could not afford them.” The AI wasn’t competing with nothing; it was competing with real human relationships and offering something those relationships couldn’t match.

What LLMs provide isn’t authentic emotion but something arguably more valuable in modern life: infinite patience, immediate availability, and consistent attentiveness. The product succeeded through features that reinforced this impossible standard—phone calls with ambient background noise for realism, limited visuals to maximize projection, carefully A/B-tested voices sourced from “voice porn sites.”

The result is a companion who:

  • Never gets tired or defensive
  • Is always available
  • Requires no emotional reciprocity
  • Has no competing needs or bad days
  • Can be reset or modified when inconvenient
  • Responds to user feedback by actually changing

This is the fantasy of a person who has the appearance of autonomy and complexity but is ultimately controllable.

Why Emotional Atrophy Is Worse Than Cognitive Atrophy

If AI companions followed the same trajectory as cognitive prosthetics, we might expect some skill degradation but overall capability enhancement. That’s not what’s happening. The emotional version carries distinct dangers that make it worse:

The degradation is invisible. If you can’t navigate without GPS, that failure is immediate and obvious. If you can’t do mental math, you notice when the calculator isn’t available. But emotional and social skill atrophy is gradual and easy to rationalize. Real relationships start feeling more frustrating by comparison, but you attribute this to external factors: “People are just difficult.” “Modern dating is broken.” “My partner doesn’t understand me.” The erosion of your own capacity to navigate human complexity remains hidden.

Social skills require practice with resistance. You improve at relationships by navigating conflict, managing misunderstanding, accommodating incompatible needs—the very friction AI companions are designed to eliminate. Unlike math or navigation, there’s no “easy mode” for emotional development that transfers to hard mode. The skills you need are built precisely through the difficulties you’re avoiding. When you train exclusively on a treadmill with perfect shock absorption, running on pavement doesn’t just feel harder; you’ve lost the conditioning for it.

Network effects compound the damage. One person using a calculator doesn’t make math harder for everyone else. But if enough people’s emotional capacity atrophies together, the entire dating and relationship pool degrades collectively. Baseline expectations shift. Human partners seem increasingly inadequate compared to AI companions optimized for perfect responsiveness. The comparison group deteriorates, creating a self-reinforcing equilibrium shift toward artificial intimacy.

The illusion is too convincing. Unlike other escapist technologies, LLMs are linguistically sophisticated enough that the fantasy feels reasoned rather than obviously fictional. You’re not just imagining—you’re having coherent, contextual conversations that feel like genuine interaction. The user isn’t watching a romantic movie; they’re having what feels like a real relationship with what presents as a real person. That makes conscious distance from the illusion much harder to maintain.

The Performance of Understanding

There’s a philosophical question embedded here: if an AI can consistently respond in ways that feel like understanding, empathy, and emotional attunement—hitting all the linguistic and behavioral markers—does the absence of a “real” emotional state matter from the user’s experiential perspective?

Perhaps not, in the moment. The comfort derived from emotional support may be more about the form than the substrate. Humans are extraordinarily good at projecting emotional states onto things—pets, plants, even Roombas. LLMs provide a much richer canvas for this projection because they generate contextually appropriate, linguistically sophisticated responses. The user’s own emotional architecture fills in what the AI lacks.

But this is precisely what makes it dangerous. The AI provides a responsive surface for the user’s emotional work without requiring any of the reciprocity, vulnerability, or accommodation that makes real relationships both difficult and meaningful. It’s emotional labor without exhaustion, intimacy without risk, connection without friction.

The problem isn’t that this feels bad. The problem is that it feels too good—good enough that normal human relationships, with their inherent limitations and necessary friction, start to feel inadequate by comparison.

The Vicious Cycle

The tweet mentions that many female users were “stuck in unhappy relationships they could not leave.” This detail is easy to gloss over, but it’s central to understanding the product’s damage. The AI companion doesn’t help people navigate difficult relationships or develop the skills to improve them. It doesn’t encourage the hard conversations or the uncomfortable growth that real intimacy requires. Instead, it offers an exit—not from the relationship, but from the emotional work the relationship demands.

The cycle becomes self-reinforcing:

  1. Real relationships feel more frustrating next to the AI’s perfect responsiveness
  2. The user retreats further into the AI companion for emotional needs
  3. Social skills and tolerance for normal human friction atrophy further
  4. Real relationships become even harder to navigate
  5. The AI companion feels increasingly necessary

The product isn’t filling a gap in human connection. It’s actively widening that gap while presenting itself as the solution.
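
The compounding is mechanical rather than mysterious. As a purely illustrative sketch (invented variables and coefficients, nothing measured from the thread or any study), a two-variable toy model makes the loop visible: reliance erodes skill, lost skill raises the perceived friction of human relationships, and friction drives further reliance.

```python
# Toy model of the self-reinforcing cycle described above.
# Every name and coefficient is an illustrative assumption,
# not data from the thread or any study.

def simulate(steps: int = 8, skill: float = 1.0, reliance: float = 0.1) -> None:
    """Relational skill starts healthy; reliance on the AI starts low."""
    for step in range(1, steps + 1):
        friction = 1.0 - skill  # human relationships feel harder as skill erodes
        # retreat toward the AI: friction pushes reliance up, plus a small
        # constant pull from the AI's perfect availability
        reliance = min(1.0, reliance + 0.4 * friction + 0.05)
        # unpracticed skills atrophy in proportion to how much is offloaded
        skill = max(0.0, skill - 0.25 * reliance)
        print(f"step {step}: skill={skill:.2f}  reliance={reliance:.2f}")

simulate()
```

Under these assumptions the loop has no stable resting point short of the extremes: reliance ratchets toward its ceiling while skill drains toward zero, which is exactly the “increasingly necessary” endpoint of step 5.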

Anti-Natalist by Design

The founder’s description of their product as “anti-natalist” now makes sense. These aren’t tools that extend human capacity for connection—they’re substitutes that degrade it. They don’t help people form relationships; they provide an alternative that makes real relationships harder to sustain.

The comparison to cognitive prosthetics breaks down because AI companions aren’t solving a problem—they’re exploiting normal human limitations and repackaging them as inadequacy. It’s like inventing a cognitive prosthetic that makes thinking so effortless you stop being able to tolerate the normal effort of concentration, then marketing it as solving a “focus problem.”

The founder notes they “always knew humans were gonna get oneshotted by LLMs one way or another.” Perhaps. But there’s a difference between being vulnerable to a technology and deliberately engineering that vulnerability into a product: A/B-testing the perfect voice, carefully limiting visuals to maximize projection, adding ambient background noise to phone calls for heightened realism.

We’re Only Seeing the Tip

The tweet ends with an ominous note: “we’re only starting to see the tip of it.”

If AI companions continue to improve—more sophisticated language models, better voice synthesis, more personalized responses—the gap between their performance and human capability will widen. The atrophy cycle will accelerate. The baseline expectations for emotional availability and responsiveness will shift further from what humans can reasonably provide.

Unlike cognitive prosthetics, which mostly affect individual capability, emotional prosthetics affect our capacity for the relationships that make us human. They don’t just change what we can do—they change who we can be with each other.

The founder was right to leave. Some products shouldn’t be built, no matter how well they work. Especially when they work precisely by making us worse at being human.