Adithyan Ravikumar
Thoughts · 7 min read

The Sky Is a Lie. So Is Your Certainty.

"Everything we hear is an opinion, not a fact. Everything we see is a perspective, not the truth." — Marcus Aurelius

I was staring at a sunset last week. Orange bleeding into yellow, the kind of sky that makes you stop mid-sentence. And somewhere between admiring it and reaching for my phone, a thought hit me.

None of those colours were out there. Not the orange. Not the yellow. Not the blue sky that had been overhead all afternoon. The entire display was happening inside my skull.

There Is No Red

When you see something red, here’s what’s actually happening. Electromagnetic radiation at a wavelength of roughly 700 nanometres hits your retina. Your visual cortex processes that signal and produces the experience of redness. The wavelength itself has no colour. The object reflecting it has no colour either. You gave it one.

And if colour, one of the most obvious, unquestioned features of daily life, is a construction, the question gets uncomfortable fast. What else did the brain build?

Turns out: everything.

“Reality exists in the human mind, and nowhere else.” — George Orwell

Sound is pressure waves. Silent universe. Your auditory cortex converts compression patterns into the experience of music, of a voice you recognise, of a door slamming. Pain is not a direct readout of tissue damage; it's a threat assessment: soldiers in combat routinely report feeling nothing from serious wounds until the danger passes. Taste is chemistry meeting memory. Time isn't perceived at a constant rate. Fear slows it. Flow compresses it. We're not measuring duration. We're constructing it.

The physical universe, at the level physics actually describes, is fields and particles. No colour, no sound, no warmth, no taste. Every quality that makes experience feel like something was generated by biological tissue sitting in a dark, silent box of bone. What we call reality is a species-specific interface. Built for survival, not for accuracy.

The Interface Doesn’t Tell You It’s an Interface

The construction isn’t the problem. The problem is that the construction feels like direct contact with the world.

You don’t experience seeing as interpreting. You just see. The brain presents its conclusions without showing its working. No asterisk. No margin of error. Just a clean, confident picture that says: this is what’s out there.

The predictive processing framework, one of the most influential ideas in current cognitive neuroscience, explains why. Your brain doesn't passively receive sensory data. It generates a prediction of what should be out there, then checks incoming signals against that prediction. Most of what you perceive at any given moment is the prediction, not the raw input. Sensory data mainly serves to correct the model when it's wrong.
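
The shape of that loop can be sketched in a few lines. This is a toy illustration of the idea, not a model of what cortex actually computes; the variable names and the learning rate are my own inventions:

```python
# Toy sketch of predictive processing (illustrative only): what you
# "experience" is the prediction, and sensory input enters only as an
# error signal that nudges the internal model a little at a time.

def perceive(prediction, sensory_input, learning_rate=0.2):
    """Return the updated internal model after one perceptual moment."""
    error = sensory_input - prediction            # mismatch signal
    return prediction + learning_rate * error     # small correction

# The model expects brightness 0.5; the world is actually at 1.0.
model = 0.5
for _ in range(10):
    model = perceive(model, 1.0)
# Over repeated corrections the model drifts toward the input, but at
# any single moment, most of what it "sees" is still its own prediction.
```

Notice that at every step the experienced value is mostly the old prediction; the raw input never appears directly, only its disagreement with the model does.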

Which means something worth sitting with: you are, most of the time, experiencing your own expectations.

This is efficient. An animal that had to process every sensory input from scratch would be too slow to survive. But efficiency and accuracy are different things. The brain optimises for the first. Not the second.

Where Perception Becomes Opinion

Here’s where it stops being an interesting fact about vision and starts being a problem we carry into every conversation, every decision, every argument.

The same architecture that constructs colour constructs your beliefs. And it uses the same shortcuts.

Confirmation bias isn’t a character flaw. It’s the predictive brain doing exactly what it was designed to do. The system preferentially surfaces information that matches its existing model, because that’s literally how prediction works. Data that confirms the model passes through easily. Data that contradicts it gets flagged, filtered, or reinterpreted. This happens before you’re consciously aware of it. You’re not choosing to ignore contradictory evidence. Your brain is choosing for you.
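
That filtering can also be made concrete with a toy sketch. Again, this is an illustration of the asymmetry, not an empirical model; the discount factor and update rule are assumptions made up for the example:

```python
# Toy illustration of motivated updating (not an empirical model):
# evidence that agrees with the current belief gets full weight,
# evidence that contradicts it is discounted before it counts.

def update_belief(belief, evidence, discount=0.3):
    """Shift a belief (in [0, 1]) toward evidence, asymmetrically."""
    agrees = (evidence > 0.5) == (belief > 0.5)
    weight = 1.0 if agrees else discount          # contradictions shrink
    return belief + 0.1 * weight * (evidence - belief)

# A perfectly balanced evidence stream: half supportive, half not.
belief = 0.7
for e in [0.9, 0.1, 0.9, 0.1, 0.9, 0.1]:
    belief = update_belief(belief, e)
# The belief barely moves, even though the evidence was 50/50.
```

With the discount set to 1.0 (symmetric weighting), the same evidence stream pulls the belief noticeably toward 0.5. The asymmetric version stays put. Nobody in this loop "chose" to ignore anything; the weighting did it before any choice was possible.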

Then there's what psychologists call motivated reasoning. When a belief is tightly integrated with identity, the brain doesn't process contradictory evidence neutrally. It processes it as a threat. This doesn't mean people can't change their minds; evidence shows that corrections work more often than we tend to assume. But the processing is asymmetric: the architecture makes it significantly easier to accept information that fits the existing model than information that contradicts it.

Memory is the same story. Every time you recall an event, you’re not playing back a recording. You’re reconstructing it from fragments, filling gaps with whatever currently seems plausible. Your memory of last Tuesday is a story your brain is telling you right now, shaped by your current mood, your current knowledge, everything that’s happened since. This is why eyewitness testimony is so problematic in courtrooms: not because people lie, but because the retrieval process is closer to creative writing than to opening a file. The Innocence Project has found that mistaken eyewitness identification is the single largest contributing factor in wrongful convictions later overturned by DNA evidence.

And the part that ties all of this together: the feeling of certainty is itself just a brain state. Not a measure of accuracy. You can feel absolutely, bone-deep certain about something that is completely wrong. Certainty is the brain flagging a strong, stable model. That’s all it is. It says nothing about whether the model matches what’s actually out there.

A brain doing its job perfectly can still be confidently, systematically wrong, and feel no different from the inside than a brain that’s correct. There’s no internal alarm that goes off when your model diverges from reality. The alarm system is part of the model.

Why This Actually Matters

It’s easy to read all this and think “interesting but academic.” It isn’t.

We’re living through a period where our constructed realities are diverging faster than ever. Two people can look at the same news event, the same data, the same video, and arrive at completely opposite conclusions, both feeling certain. That’s not because one side is stupid and the other is smart. It’s because their predictive models, shaped by different inputs, different environments, different algorithms feeding them different information, are constructing different realities. And both constructions feel equally solid from the inside.

Social media amplifies this by feeding the prediction engine exactly what it expects. Every recommended post, every algorithmic suggestion is a confirmation of the existing model (more on this here: Internet Multiverses). The brain gets better and better at predicting what it will see next, which means it gets less and less exposed to prediction errors, which means the model gets more rigid, more confident, and further from anything resembling a shared reality. We've essentially built technology that exploits the brain's oldest architecture.

This applies at the personal level too. That business decision you feel certain about? The certainty might be the prediction engine running smoothly, not the prediction engine running correctly. That memory of a conversation where someone said something hurtful? Your brain may have reconstructed the words to match how the interaction felt, not what was actually said.

We’re making decisions all day, every day, with a tool that prioritises speed and pattern-completion over accuracy. And the tool doesn’t come with a warning label.

What We Actually Do With This

The temptation is to slide into “be more open-minded” territory, but that’s vague to the point of uselessness. The research points to something more specific.

Actively look for disconfirming evidence. Not as some noble exercise in intellectual humility, but because the architecture will not do this on its own. The prediction engine filters for confirmation by default. Overriding that takes deliberate effort, every single time. I’ve started a small practice: when I feel strongly about something, I search for the best argument against my position. Not a strawman. The actual strongest case. It’s uncomfortable. That discomfort is the prediction engine resisting an update. Which is exactly why it’s worth doing.

Slow the gap between perception and conclusion. The brain’s snap judgments evolved for physical survival, where speed matters more than nuance. They are significantly less reliable for the social, political, and intellectual questions we now ask them to handle (which, to be fair, is not the problem the brain was built to solve). When someone says something that triggers an immediate emotional response, that response is the prediction engine firing, not you arriving at a considered position. The considered position comes later, if you give it room.

Treat other people’s perceptions as data, not error. If your construction and someone else’s diverge, the difference is information. They might be picking up a signal your model is filtering out. This isn’t about being agreeable. It’s about recognising that every interface has blind spots, and other people’s blind spots are usually in different places than yours.

And notice certainty. Not to distrust everything you think, but to recognise that the stronger the feeling of “I just know this is true,” the more it’s worth examining. The feeling is generated by the same system that invented the colour of the sky.

The sky isn’t blue. You are. The question worth carrying around isn’t whether our brains are unreliable. They are, measurably, by design. The question is whether we know which parts of our reality are the sky and which parts are us.
