
Ignore the Feathers

Author: هاني الشاطر

Every team I've worked with has more good ideas than capacity. That's never the problem. The problem is selection — how do you look at thirty directions and pick the five that matter, when every direction comes wrapped in expertise, credentials, and a compelling narrative?

For that, we need to talk about peacocks. Bear with me.

In 1975, Amotz Zahavi proposed the handicap principle: a signal is credible precisely because it's costly to produce. The peacock's tail — three feet of iridescent absurdity that basically screams "eat me" to every predator — works as a mating signal not despite being ridiculous, but because it's ridiculous. Only a genuinely fit bird can walk around looking like a Vegas showgirl and survive. The signal stays honest because the cost is real and the feedback is local: one peacock, a few peahens, one territory.

We don't live in that world. Two things pulled us out of it. First, we stopped passing genes and started passing ideas — but initially person to person, where the interaction stayed honest because people could push back and verify. Second, we started broadcasting. You can't sit across from everyone you want to influence, so you attach ideas to narratives, and the narratives travel where you can't.

"I'm a Bayesian." "We're in information retrieval." "We're data scientists." These aren't knowledge claims. They're tribal flags. You plant one, join a tribe, and the tribe becomes a propagation network. Ideas that fit the tribe's narrative face less friction — they get cited more and questioned less; people get hired and promoted for them. The scrutiny is real. The asymmetry is too.

And the person planting the flag isn't some charlatan. The researcher's costly signal — real PhD, real credentials, real years of grinding — now passes ideas to thousands. The cost is genuine. But the function has shifted from quality to spread. Fancy papers that never include the embarrassing baseline. Research agendas shaped by what travels well rather than what works. The expertise is real. The relationship to measurable improvement is optional.

So what do you call it when nobody is lying but the truth has drifted out of the picture? Harry Frankfurt distinguished liars from bullshitters. A liar knows truth and hides it. A bullshitter doesn't care about truth at all.

We celebrate the low-functioning kind of bullshit. "Fake it till you make it." The junior engineer who oversells on a whiteboard, the startup founder who pitches a product that doesn't exist yet. It's easy to spot, easy to discount, and honestly, sometimes it works out. The fakery is visible. You can price it in.

The dangerous kind is what I'd call high-functioning bullshit — and it looks nothing like faking. High-functioning bullshit is what happens when genuine expertise, genuine credentials, and genuinely costly signals get oriented not toward truth — but toward influence. Nobody is performing. That's what makes it so hard. The researcher has internalized the tribal narrative so deeply that sophistication feels like rigor. They defend their approach with authentic conviction because the conviction is authentic. You can't find inconsistencies — there aren't any. You can't detect insincerity — there is none. The self-deception isn't a bug. It's the armor.

The only thing missing is contact with reality — has anyone tested this? — and that question never feels urgent when the narrative is this polished. It would feel almost rude.


You might think the fix is obvious: make people pay for being wrong. Taleb saw part of this. Skin in the game: a signal is credible when the signaler bears the downside. The principle is sound. But broadcasting breaks it. The researcher who championed the fancy approach has moved companies by the time you discover the boring alternative worked. Their reputation is intact. Their signal traveled to thousands who will never see the follow-up. Your roadmap absorbed the cost.

People notice, of course. Engineers call out over-engineering. Simplicity is the real skill! Then something beautiful and terrible happens. SmolLM. Minimalism. The critique of costly signaling grows its own costly signals — new tribe, new credentials, new propagation network. The meta-game absorbs the critique and keeps running.

If even the rejection of high-functioning bullshit becomes high-functioning bullshit, where do you stand?


You stand in a conference room, running a roadmap workshop. The whiteboard is full of ideas, and every one of them has a peacock behind it — a prestigious paper, a conference keynote, a success story from another company. You have a team, a quarter, and real outcomes that someone above you will hold you to — retention, revenue, latency, whatever your org actually bets on. You're not selecting mating partners. You're allocating engineering months.

And every one of those ideas radiates genuine, hard-won, expensively acquired confidence. The conviction behind them is sincere. Nobody is lying. But the people those ideas came from — the paper authors, the keynote speakers, the ones who built the thing somewhere else — aren't on the hook if retention doesn't move.

That's you. That's the hook. The conviction walks away intact. You get the dashboard.

What makes it worse: sometimes the expert is right. And discounting expertise wholesale is just another tribal flag — a lazy one. You can't tell the difference between an expert who's right and an expert who's captured. They look identical, sound identical, feel identical from the inside. So stop trying to tell the difference. Test the idea instead of judging the source.


There's no clean exit from this. You can't verify everything. That's Tuesday. But you can shrink the room down to village size — small enough that consequences land on ideas, not on your roadmap.

Put skin back in the game — your game. Before anyone gets attached, agree on what failure looks like. "If we don't see X% lift in two weeks, we stop." Write it down. If an idea's author won't define the exit condition, the narrative is doing the talking — not the evidence.

Time-box and invalidate fast. High-functioning bullshit thrives in long planning cycles where stories harden before anyone experiments. Two weeks, clear metric, go or stop. Even deep R&D can answer one question: what would we expect to see in two weeks if this were going to work? If the answer is "nothing, just trust the vision" — that's the narrative again.

Score to break the spell. Give every idea a score: expected metric lift × justified confidence × inverse cost. Confidence must be justified, not felt — what evidence actually exists? Cost means time-to-first-signal — how fast can we know if this works at all? It's much harder to let a narrative carry you across three dimensions than one.
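That scoring rule is simple enough to write down. Here's a minimal sketch in Python — the field names, the example ideas, and their numbers are all illustrative assumptions, not anything from a real roadmap; the point is only that ranking across three dimensions at once is harder for a narrative to game than any single one:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    expected_lift: float    # projected metric improvement (e.g. % lift), a stated estimate
    confidence: float       # 0..1, justified by actual evidence, not by how it feels
    weeks_to_signal: float  # time until we'd know whether it works at all

    def score(self) -> float:
        # expected metric lift x justified confidence x inverse cost
        return self.expected_lift * self.confidence / self.weeks_to_signal

# Hypothetical roadmap candidates for illustration only.
ideas = [
    Idea("fancy-approach-from-the-keynote", expected_lift=4.0, confidence=0.3, weeks_to_signal=6),
    Idea("boring-baseline-fix", expected_lift=2.0, confidence=0.8, weeks_to_signal=1),
]

for idea in sorted(ideas, key=lambda i: i.score(), reverse=True):
    print(f"{idea.name}: {idea.score():.2f}")
```

Note how the ranking comes out: the fancy idea's bigger promised lift is swamped by its thin evidence and slow feedback, while the boring fix wins on justified confidence and time-to-first-signal. The spell breaks when the arithmetic is on the whiteboard next to the narrative.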

Make killing ideas prestigious. When something doesn't survive data, the person who killed it early bought back engineering months. That's the most valuable thing in your org: a fast answer. Make it more celebrated than the person whose idea survived by never being tested.

Stay skeptical of narratives, including this one. "Just watch the metrics" is also a story. Goodhart's ghost haunts every dashboard ever built. But a metric argues back. A narrative just sits there being elegant. When you chase the wrong metric, it eventually tells you. When you chase the wrong narrative, nothing breaks — it just feels right, quarter after quarter, until someone runs the embarrassing baseline and the room goes quiet.

The peacock's tail works because the cost is real and the feedback is local. You can't fix the world's signaling problem. But you can make the room small enough that it works like a village again — where consequences land on ideas, not careers, and the feedback loop is shorter than a job change.

The feathers are gorgeous. Ignore them. Watch the outcome. And when the metric lies — at least you'll know. The narrative never even offered you that.