The Thing I Didn't Know About the Thing I Thought I Knew

I was going to write this article because I thought I knew all there was to know about the Dunning-Kruger effect. I sat down, cracked my knuckles, and prepared to hold forth on the subject of people who don't know what they don't know. Without a shred of irony. Without a flicker of self-awareness that I was about to become the very thing I was writing about.

Let me tell you what happened next.

The first thing I discovered is that the graph is fake. You know the one. Mount Stupid, the Valley of Despair, the Slope of Enlightenment -- that satisfying curve that gets wheeled out in every conference talk, every LinkedIn post, every smug conversation about why other people are wrong about things. That graph does not appear in the original 1999 Kruger and Dunning paper. Not once. The paper contains quartile bar charts comparing perceived ability to actual test scores among Cornell undergraduates sitting logic and grammar exams. No mountain. No valley. No slope. The curve that half the internet attributes to two Cornell psychologists was drawn by Zach Weinersmith in a webcomic in 2011, probably influenced by the Gartner Hype Cycle, which itself is a marketing framework dressed up as science.

I did not know this. I had cited that curve. I had used it in presentations. I had nodded along knowingly when others used it, as though I were intimately familiar with the underlying research.

I was not.

[Image: A Dunning-Kruger curve being erased from a whiteboard, revealing nothing beneath]

The second discovery was worse. In 2022, a researcher called Blair Fix published an analysis showing that the Dunning-Kruger effect can be reproduced using entirely random data. The curve is an artefact of autocorrelation. When you plot people's self-assessment error against their actual performance, you are plotting a variable against a component of itself. The maths produces the curve whether or not the psychology exists. Gignac and Zajenkowski, in a 2020 study, used proper statistical methods and found "much less evidence" for the effect than the original paper claimed. McGill University's Office for Science and Society now states flatly that the Dunning-Kruger effect is "probably not real."
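Blair Fix's point is easy to verify for yourself. The sketch below is my own illustration of the argument, not his actual code: it generates perceived and actual scores that are completely independent random noise, then averages perceived ability within quartiles of actual score, the way the classic plots do. The familiar pattern appears anyway -- the bottom quartile seems to overestimate and the top quartile seems to underestimate -- purely because an uninformative self-assessment sits near the middle regardless of where the actual score falls.

```python
import random
import statistics

random.seed(42)
n = 10_000

# Purely random, independent data: there is no psychology here at all.
actual = [random.uniform(0, 100) for _ in range(n)]
perceived = [random.uniform(0, 100) for _ in range(n)]

# The classic Dunning-Kruger plot groups people by quartile of actual
# score and compares mean perceived ability to mean actual score.
pairs = sorted(zip(actual, perceived))  # sort by actual score
quartiles = [pairs[i * n // 4:(i + 1) * n // 4] for i in range(4)]

for q, chunk in enumerate(quartiles, start=1):
    mean_actual = statistics.mean(a for a, _ in chunk)
    mean_perceived = statistics.mean(p for _, p in chunk)
    print(f"Q{q}: actual ~ {mean_actual:5.1f}, perceived ~ {mean_perceived:5.1f}")

# Bottom quartile: actual near 12.5, perceived near 50 -> "overconfident".
# Top quartile: actual near 87.5, perceived near 50 -> "underconfident".
# The curve emerges from noise, because perceived minus actual is being
# plotted against actual -- a variable against a component of itself.
```

Run it and the "unskilled and unaware" staircase falls out of coin flips. That is the autocorrelation artefact in miniature.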

The original 1999 paper has some 7,893 citations. The debunking papers have about 88 between them.

Here is what that means: the most famous psychological concept about people who don't understand things they claim to understand may itself be a thing that people don't understand while claiming to understand it. The persistent irony is not incidental. It is structural.

Every one of us has been complicit in it, and not one of us bothered to check.

What the Paper Actually Tested

The third thing I learnt is that virtually everyone misuses the concept. The original study tested Cornell undergraduates on logical reasoning and grammar. Not the general public. Not "dumb people versus smart people." Not experts versus novices in any real-world domain. The dramatic effect that Kruger and Dunning found was in relative self-placement -- how students ranked themselves compared to their peers. When tested with direct methods, about 80% of the bottom-quartile students could accurately assess their own absolute competence. They knew roughly how well they had done. They just couldn't estimate where they sat relative to everyone else.

This matters because the way Dunning-Kruger gets deployed in the wild -- "that person is too stupid to know they're stupid" -- is not what the paper found. Scientific American pointed this out years ago. Nobody listened. The meme was too satisfying. The feeling of superiority it grants -- "I, unlike those people, can see my own limitations" -- is too intoxicating to surrender to mere evidence.

We have all cited that curve. We have all nodded along. And we have all, at some point, used it to feel cleverer than someone else in the room.

Which brings me to skydiving.

153 Jumps

I have done 153 solo skydives. Never a tandem. I started jumping out of aeroplanes because I wanted to, not because I was strapped to someone who knew what they were doing. To a layperson, 153 is a lot. At a dinner party it sounds reckless, impressive, slightly unhinged. To anyone in the sport, 153 makes me a relative noob. The serious skydivers -- the ones doing formation work, wingsuit proximity flying, canopy piloting -- have thousands of jumps. I am, in the language of drop zones, a "low-timer."

I have always used this to self-deprecate:

"Best case, I'm either still climbing towards Mount Stupid, or already sliding back down into the Valley of Despair."

It is a line I have delivered many times. It gets a laugh. It makes me sound humble and self-aware. It signals that I understand the Dunning-Kruger effect and have inoculated myself against it through the hard-won wisdom of knowing my place on the curve.

Except the curve is fake. And the self-deprecation is doing something I did not intend it to do.

Feltovich, Harbaugh and To formalised counter-signalling theory in 2002. The insight is devastatingly simple: high-ability agents can afford not to signal their competence. When a genuinely skilled person says "I'm not that good," it does not read as humility. It reads as proof that they are good enough not to need to say so. The self-deprecation is a power move. It is the intellectual equivalent of a billionaire wearing a hoodie. The modesty is the flex.

When I say "153 jumps, that's nothing in the sport," what you hear is: this person has done 153 solo skydives and is so comfortable with that fact that he can dismiss it. The self-deprecation does not reduce my status. It amplifies it. Harvard researchers Sezer, Gino and Norton showed in 2018 that humblebragging backfires worse than straightforward bragging. People see through it. They like you less for it than if you had simply said "I've done 153 skydives and I loved every one of them."

I did not know this either. The self-deprecation I thought was my most honest move was, it turns out, my most dishonest one.

[Image, AI generated: an empty skydiving rig hanging in a sunlit hangar, evoking courage and self-deception]

Then Like Now

Bertrand Russell, writing in 1933 in an essay called The Triumph of Stupidity, put it with a precision that has not been bettered in the nine decades since: "The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt." He was writing about the rise of the Nazis. The context matters. Russell was not making a dinner-party observation about overconfidence. He was watching a continent slide towards catastrophe and diagnosing -- with the clarity of a logician who had co-written Principia Mathematica -- the asymmetry of conviction that made it possible. The stupid were cocksure. The intelligent doubted themselves. And the stupid won, because certainty is a weapon and doubt is not.

Russell saw it in 1933. Socrates described it twenty-four centuries before that. Kruger and Dunning did not discover anything. They gave a 2,500-year-old observation empirical clothing -- and even the empirical clothing now appears to be made of autocorrelation.

The instinct to feel superior to the overconfident is ancient. What Dunning-Kruger did was make that instinct seem scientific. It gave us a graph -- a fake graph, from a webcomic -- and a citation, and we could point at people we disagreed with and say "Dunning-Kruger" instead of saying "I think you're wrong." It became the intellectual's "because I said so."

I was doing exactly this. In my head, I had sorted the world into people who knew about Dunning-Kruger (enlightened, like me) and people who didn't (the poor sods still stuck on Mount Stupid). The categorisation was itself a demonstration of the very bias I thought I was immune to.

The Bias Blind Spot

Here is the final knife, and it cuts all of us. Pronin, Lin and Ross demonstrated in 2002 that knowing about cognitive biases makes you think you are less susceptible to them. Not more. Less. This is the bias blind spot: the meta-bias, the one that weaponises your own knowledge against you. The more you know about Dunning-Kruger, the more confident you become that it applies to other people and not to you. Cognitive debiasing research confirms it -- awareness does not debias. It adds a layer of false security.

We have all done this. We have all learnt a concept, felt the warm glow of understanding, and then immediately deployed it as a weapon against someone we thought understood it less well. The bias blind spot is not an edge case. It is the default human response to learning about bias.

I sat down to write an article about people who don't know what they don't know. I was going to explain it to you. I was going to cite the research, draw the curve, reference the original paper, and demonstrate my sophisticated understanding of a concept that turns out to be a statistical artefact wrapped in a webcomic illustration popularised by people who never read the study they were citing.

People like me.

[Image, AI generated: an empty lecture theatre with a blank projection screen, the architecture of authority without content]

The Hard Part

The hard-easy effect tells us that self-assessment is contextual, not fixed. There is no stable "place on the curve" because there is no stable curve. My 153 jumps make me an expert to my mother and a beginner to anyone at Skydive Perris. The expertise is not a property of me. It is a property of the room I am standing in.

So the next time someone drops "Dunning-Kruger" into a conversation, ask yourself one question: is this person using the concept to understand something, or to win an argument? That is your diagnostic. If the answer is "to win," you are not witnessing insight. You are witnessing the bias blind spot in real time -- a thought-terminating cliché dressed up as psychology, deployed to shut down a person rather than engage with what they are saying. Stop nodding along. Name what is actually happening.

There is something genuinely uncomfortable about realising that the tool you use to demonstrate your self-awareness is itself a demonstration of your lack of self-awareness. I cannot resolve this. I cannot now pivot to a new, corrected understanding of Dunning-Kruger and deploy it with the same confidence I had before, because the whole point of the last 2,000 words is that the confidence was the problem.

What I can tell you is this. I thought I was writing an article about other people. I was writing an article about myself. The research process for this piece -- which I began as a victory lap through well-understood territory -- became instead a series of discoveries that each, in turn, made me feel more foolish than the last. The fake graph. The autocorrelation. The counter-signalling. The bias blind spot.

Every layer peeled back revealed another layer of my own overconfidence about a concept that describes overconfidence.

Socrates was right.

Knowing that you know nothing is not the end of wisdom. It is the beginning of the realisation that even that knowledge -- the knowledge of your own ignorance -- is something you can be wrong about.

I have done 153 skydives. I don't know what that means about me. And for the first time, I am not going to pretend that not knowing is the same as being wise.

But I will tell you this: the next time someone invokes Dunning-Kruger to dismiss a colleague in a meeting, ask them if they have read the paper. Ask them where the graph comes from. Watch what happens to their confidence.

(Views in this article are my own.)