Tom Chivers puts Bayes’ theorem in plain English, and it sounds almost obvious: “the probability of event A, given event B, equals the probability of B given A, times the probability of A on its own, divided by the probability of B on its own.” A formula for revising what you believe when new evidence arrives. You started somewhere. Something changed. Now you believe something slightly different. Repeat.
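In symbols: P(A|B) = P(B|A) · P(A) / P(B). A minimal sketch of a single update, with every number invented purely for illustration, shows how little machinery is involved:

```python
# One Bayesian update, applying Bayes' theorem directly.
# All numbers are invented for illustration.

p_a = 0.3              # prior: P(A), the belief before the evidence
p_b_given_a = 0.8      # likelihood: P(B | A), how expected B is if A is true
p_b_given_not_a = 0.2  # P(B | not A), how expected B is if A is false

# Total probability of seeing the evidence at all: P(B)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' theorem: P(A | B) = P(B | A) * P(A) / P(B)
posterior = p_b_given_a * p_a / p_b
print(f"prior {p_a:.2f} -> posterior {posterior:.3f}")  # prior 0.30 -> posterior 0.632
```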
The obvious part is the mechanics. The hard part is the loop.
Most reasoning errors I catch in myself aren’t failures of logic — they’re failures to update. I hold a view, evidence accumulates against it, and I find reasons the evidence is flawed rather than reasons the view might be.
Psychologists have a name for this: confirmation bias. But I’ve always found that label a bit too clean, like it describes a bug rather than a feature.
The prior isn’t wrong to be sticky. It represents everything you’ve learned up to this point. The problem is when it becomes load-bearing — when the prior stops being a starting position and starts being a conclusion.
“Strong opinions, loosely held” is supposed to solve this. It’s a useful phrase — it captures something true about the right posture toward your own beliefs. But in practice the second half is harder to honor than it sounds. The strong opinion gets stated, new evidence arrives, and changing your mind in public feels like losing. The “loosely held” part quietly becomes decorative.
What Bayes actually demands is something closer to epistemic humility with arithmetic attached. You don’t get to say “I don’t know.” You have to say “I estimate 0.4, and here is what would move me to 0.6.” That’s harder. It requires you to specify not just what you believe but how you’d know if you were wrong.
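The odds form of Bayes’ theorem says exactly what that move would cost: posterior odds equal prior odds times the likelihood ratio of the evidence. A quick sketch with those same numbers:

```python
# How strong would evidence have to be to move a belief from 0.4 to 0.6?
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.

def odds(p):
    return p / (1 - p)

prior, target = 0.4, 0.6
required_lr = odds(target) / odds(prior)
print(f"required likelihood ratio: {required_lr:.2f}")
# 2.25 -- evidence that is 2.25x more likely if the belief is true than if it is false
```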
This is why Bayesian thinking keeps surfacing in AI conversations. Modern language models do something structurally adjacent to this — not consciously, but mechanically. Every generated token is drawn from a probability distribution conditioned on everything that came before. The model doesn’t know the next word; it maintains a distribution over all possible words and reshapes it with each new piece of context. It’s not reasoning the way humans reason, but it’s updating the way Bayes updates: continuously, contextually, without the luxury of certainty.
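A toy version of that interface, with made-up words and hand-picked scores rather than a real model, shows the shape of it: context in, distribution over next words out.

```python
import math

# Toy next-word model. Not a real language model: the scores below are
# invented by hand, where a real model computes them from the whole context.
def scores_for(context: str) -> dict[str, float]:
    if context.endswith("strong opinions, loosely"):
        return {"held": 5.0, "argued": 1.0, "worn": 0.5}
    return {"held": 1.0, "argued": 1.0, "worn": 1.0}  # no signal: flat scores

def next_word_distribution(context: str) -> dict[str, float]:
    scores = scores_for(context)
    z = sum(math.exp(s) for s in scores.values())  # softmax normalization
    return {w: math.exp(s) / z for w, s in scores.items()}

dist = next_word_distribution("strong opinions, loosely")
print({w: round(p, 3) for w, p in dist.items()})
# {'held': 0.971, 'argued': 0.018, 'worn': 0.011}
```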
Whether that’s comforting or unsettling probably depends on your own prior.
The deeper thing Chivers is pointing at, I think, is that Bayesian reasoning is essentially a description of intellectual honesty as a process rather than a trait. You can’t just decide to be open-minded. You have to build the loop: form a belief, assign it a probability, watch for evidence that should move it, and then actually move it. Most of us do the first three. The fourth step is where it gets expensive.
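The first three steps are cheap enough to fit in a few lines. A sketch of the loop, with invented likelihood ratios standing in for each piece of evidence:

```python
# Iterated Bayesian updating: fold each piece of evidence into the belief.
# Evidence is summarized as a likelihood ratio; all numbers are invented.

def update(belief: float, likelihood_ratio: float) -> float:
    posterior_odds = (belief / (1 - belief)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

belief = 0.7  # a confident starting view
for lr in [0.5, 0.8, 0.6]:  # three mildly disconfirming observations
    belief = update(belief, lr)
    print(f"evidence with LR {lr}: belief is now {belief:.2f}")
# 0.54, 0.48, 0.36 -- the arithmetic moves; the fourth step is moving with it.
```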
I’ve been wrong about enough things by now that I’ve started to treat my own confident views with mild suspicion. Not paralysis — you have to act on something — but a background awareness that the prior I’m acting on was formed by a person who had less information than I do now, and less than I’ll have next year.
“Strong opinions, loosely held” sounds right. The trick is meaning it.