I found myself staring at the physical geometry of a conversation the other day—not the words, but the topography of the faces delivering them.
Elad Gil recently shared a fascinating experiment during a conversation with Tim Ferriss. He’s been uploading photos of startup founders into AI models and asking the machines to predict if they’d be successful, purely based on their “micro-features.”
“Because if you think about it, we do this all the time when we meet people, right? We quickly try to create an assessment of that person, their personality, and what they’re like. There are all these micro-features—like, do you have crow’s feet by your eyes, which suggests that your smiles are genuine? […] So, I have this whole set of prompts that I’ve been messing around with, just for fun, around: ‘Can you extrapolate a person’s personality based off of a few images?’”
He notes the model breaks down the crow’s feet and the furrowed brows, extrapolating a personality from a static frame. It’s a parlor trick, perhaps. But it works because it holds a mirror to our oldest, most unexamined instinct.
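For the curious, the mechanics of such an experiment are mundane: you attach a photo to a text prompt and send both to a multimodal model. The sketch below builds an OpenAI-style chat payload pairing an image with a personality-reading prompt. The model name, the prompt wording, and the stub image bytes are my own illustrative assumptions, not Gil's actual prompts.

```python
import base64
import json

# Illustrative prompt in the spirit of the one Gil describes; his exact
# wording is not public.
PROMPT = (
    "Can you extrapolate this person's personality based off of this image? "
    "Note micro-features such as crow's feet or a furrowed brow."
)

def build_request(image_bytes: bytes, model: str = "gpt-4o") -> dict:
    """Build an OpenAI-style chat request pairing the prompt with one image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,  # assumed model; any vision-capable model would do
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": PROMPT},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                    },
                ],
            }
        ],
    }

# With a real API key, this dict would be POSTed to the chat completions
# endpoint; here we only show the shape of the payload with stub bytes.
request = build_request(b"\xff\xd8\xff stub-jpeg-bytes")
print(json.dumps(request)[:60])
```

The point of the sketch is how little machinery is involved: one image, one leading question, and the model will oblige with a confident read.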
We are all amateur phrenologists of the human face. We sit across a table, measure the crinkle of an eye or the tightness of a jaw, and we build a rapid, invisible architecture of trust or suspicion. Over decades of investing and making career choices, I’ve often leaned heavily on this silent language. I’ve backed founders because their intensity felt genuine, and I’ve passed on others because something in their posture felt misaligned.
But if I am brutally honest, that intuition has sometimes been a mask for my own blind spots. I’ve held on to failing investments for far too long because I trusted a reassuring smile. We like to think our gut instinct is a sophisticated instrument. Often, it is just a pattern-matching engine running on deeply flawed historical data.
Now, we are handing that very human habit over to a machine. We prompt the AI to become a “cold reader,” and it obliges, predicting who will be the quiet observer and who will deliver the dry wit.
The unsettling part isn’t that the machine might get it wrong. The unsettling part is that it might get it exactly right—by mimicking the very same rapid, superficial judgments we make every day, just at a terrifying scale.
We are teaching silicon to read the human code. The future will belong to those who realize the code was always written in our own biases.