The Competence Trap: Why Being Good at Many Things Makes Self-Assessment Nearly Impossible

We all know the type who announces their skills on social media. “Crisis management is one of my deepest competencies,” they tweet, while actively demonstrating the opposite. The irony is obvious to everyone but them. But recognizing others’ inflated self-assessments is easy. The harder question is: how do we avoid the same trap ourselves?

The answer is more difficult than it appears, especially for a particular kind of person: the competent generalist.

The Performance Problem

Start with a simple principle: competent performers demonstrate, they don’t declare.

When someone publicly announces their mastery of precisely the skills that would make them impressive—emotional regulation, crisis management, “reading the room”—you should be skeptical. Not because people are always lying, but because of what the declaration itself reveals.

Someone genuinely skilled at engagement control would simply exercise it in the moment rather than announce that they possess it. The announcement often represents the performance they’re capable of: the appearance of the thing rather than the thing itself.

Consider what the person tweeting about their “engagement control” is actually demonstrating: high reactivity to social dynamics (tracking what “everyone else” is doing), investment in being perceived a certain way (calm/analytical/superior), and lack of self-awareness about performing the claim publicly. These are markers of high social-emotional activation, not controlled engagement.

The declaration often serves the very dynamic it claims to transcend.

The Self-Assessment Problem

If we can spot these patterns in others, why can’t we spot them in ourselves? The standard answer invokes the Dunning-Kruger effect: unskilled people lack the metacognitive ability to recognize their incompetence. But this explanation, while true, doesn’t help much. Telling someone they might be suffering from Dunning-Kruger is like telling them they might be dreaming—the warning can’t penetrate the condition it describes.

The more useful question is: what external resistance can we use to calibrate our self-assessment?

Performance under constraint provides the highest signal. The benchmark needs to be non-negotiable—something reality enforces regardless of your story about it. Can you navigate a country where you have no choice but to use the local language? Can you do the technical work while tired, distracted, and under time pressure? Do people who don’t know you finish what you write? The environment either accommodates your performance or it doesn’t.

Involuntary selection by others matters too. Not whether people compliment you—that’s social lubrication—but whether they come to you when they have a problem and options. It’s especially telling when they have to overcome friction to do so: you’re not convenient, you’re not their friend, but they need the thing done and they choose you anyway.

What doesn’t work: internal feelings of competence (uncorrelated with performance), compliments from people who benefit from their relationship with you, success in environments you control, or comparison to your past self. Improvement doesn’t equal competence.

The Generalist’s Dilemma

But there’s a specific population for whom even these calibration methods become unreliable: people who are legitimately above-average at many things but exceptional at none.

The competent generalist faces a genuine self-assessment difficulty. You get real positive signal—people do benefit from your contributions, things do work better when you’re involved, you do solve problems others can’t. The feedback isn’t wrong; it’s just noisy about level.

Three mechanisms create the trap:

Selection effects obscure the ceiling. You naturally avoid or exit domains where you’d face serious competition. You’re comparing yourself to “people attempting this thing,” not “people who specialized in this thing.” Your reference class flatters you without your noticing.

Broad competence masks the gap to excellence. The psychological distance from the 60th percentile to the 95th is compressed; both feel like “being good at it” from inside. But the performance difference is enormous. Someone at the 60th percentile can complete most tasks successfully. Someone at the 95th percentile produces work that changes how others think about the domain. These feel subjectively similar—both involve successfully solving problems—while being completely different in what you can actually deliver (see the sketch after this list).

The feeling of figuring things out is level-blind. You experience yourself as “someone who figures things out.” This is true—you do figure things out. But “figuring out” at different levels of capability feels identical from inside, even as the results diverge dramatically.
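To make the compression concrete, here is a minimal sketch, assuming purely for illustration that skill in a domain is roughly normally distributed (the essay makes no such distributional claim). Under that assumption, the 60th and 95th percentiles sit nearly 1.4 standard deviations apart: a small move on the percentile scale you experience from inside, a large one on the scale reality measures.

```python
from scipy.stats import norm

# Illustrative assumption: domain skill is roughly normally distributed.
# Real skill distributions are often heavier-tailed, which would make
# the gap larger, not smaller.
z_60 = norm.ppf(0.60)  # ~0.25 standard deviations above the mean
z_95 = norm.ppf(0.95)  # ~1.64 standard deviations above the mean

print(f"60th percentile: z = {z_60:.2f}")
print(f"95th percentile: z = {z_95:.2f}")
print(f"gap: {z_95 - z_60:.2f} standard deviations")
```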

The competent generalist can accurately assess their limits in domains where they’ve gotten clear negative feedback, where the gap between their performance and genuinely skilled performance was visible, where they couldn’t exit before the inadequacy became obvious. But in domains where “good enough” actually is good enough for the context, they never encounter the resistance that would calibrate their assessment.

Finding Your Ceiling

The correction requires looking for moments where you encountered your ceiling—not failure exactly, but the point where additional effort stopped translating to additional results. Where you plateaued despite caring about improvement.

These plateaus are more informative than your successes: conversations where you genuinely tried to understand someone’s technical domain and couldn’t follow beyond a certain depth, projects where you hit the limit of your architectural thinking, writing where you couldn’t make the argument tighter no matter how many passes.

These moments show you where “smart generalist” stops being sufficient.

The deepest issue: being good at many things creates an experience of capability that feels like it should generalize more than it does. You solve problems across domains, you learn quickly, you produce results. This creates a phenomenological sense of competence that maps poorly onto actual performance levels in any specific domain.

The Transmission Test

There’s one final calibration available, particularly for people building intellectual frameworks or systematic approaches: can others use your methods to get your results?

If you’ve developed a rigorous way of thinking—about self-assessment, about collaboration with AI, about anything—the test is whether your protocols actually transmit the capability, or whether your own intelligence does most of the work while the protocol gets credit.

A conversation that produces insights might demonstrate that good frameworks exist. It doesn’t prove those frameworks are weight-bearing. The structure might be sound, but if it only works when you’re operating it, you’ve documented your thinking process rather than built transmission infrastructure.

The competent generalist is especially vulnerable here. Your ability to make almost any framework work (because you’re calibrating and adjusting in real time, because you’re filling gaps with general intelligence) can disguise whether the framework itself carries weight.

Living Without Certainty

None of this solves the self-assessment problem. It just makes it manageable.

The practical stance: hold your competencies as hypotheses, not identities. “I might be good at X” lets you test it. “I am good at X” makes disconfirming evidence threatening.

The test of whether you’re doing this right: can you genuinely update on evidence that you’re worse at something than you thought? If that feels like ego death rather than useful information, your identity is wrapped up in the claim.
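If you want the hypothesis stance to be more than a slogan, you can treat a competency claim as an estimate that moves with evidence. A minimal sketch, assuming you log pass/fail outcomes from the kinds of constrained tests described earlier; the class name, prior, and outcomes below are illustrative, not anything the essay prescribes:

```python
from dataclasses import dataclass

@dataclass
class CompetenceHypothesis:
    """Beta-Bernoulli estimate of 'I clear the bar at X'."""
    alpha: float = 1.0  # prior pseudo-count of passes
    beta: float = 1.0   # prior pseudo-count of failures

    def update(self, passed: bool) -> None:
        # Each constrained trial (deadline met, stranger finished the
        # draft, system held up under load) shifts the estimate.
        if passed:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def estimate(self) -> float:
        return self.alpha / (self.alpha + self.beta)

h = CompetenceHypothesis()
for outcome in (True, True, False, True, False, False):
    h.update(outcome)
print(f"P(I clear the bar) ~ {h.estimate:.2f}")  # 0.50 after 3/6 passes
```

The arithmetic is trivial by design. The point is the posture: each outcome moves the estimate, no single failure settles anything, and “I might be good at X” stays a number you can revise rather than an identity you have to defend.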

For the competent generalist, this means accepting that you probably are above-average at many things, while remaining uncertain about whether you’re exceptional at anything. That uncertainty isn’t a bug. It’s the appropriate epistemic state given the calibration difficulties you face.

The person tweeting about their crisis management skills has solved the self-assessment problem by refusing to engage with it. They’ve chosen certainty over accuracy.

The harder path is living with the discomfort of not quite knowing how good you are, and using that discomfort to keep seeking better resistance, clearer signals, more honest feedback.

That discomfort might be the only reliable sign you’re doing it right.