Categories
Living Serendipity

The Architecture of the Unexpected

We spend an incredible amount of energy trying to build a ceiling over our lives, a structure made of spreadsheets, five-year plans, and trend forecasts. We convince ourselves that if we just gather enough data, the future will become a navigable map. But Morgan Housel, in Same as Ever, cuts through this illusion with a quiet, devastating observation:

“We are very good at predicting the future, except for the surprises—which tend to be all that matter.”

It is a humbling thought. We can predict the mundane with startling accuracy—the seasons, the commute, the steady inflation of a currency. But the events that actually shift the trajectory of a life, a business, or a civilization are precisely the ones that no model accounted for. We are experts at forecasting the rain, yet we are consistently blindsided by the flood.

This reveals a profound tension in the human experience. We crave certainty because certainty feels like safety. We want to believe that the “tail events”—those low-probability, high-impact occurrences—are outliers we can ignore. In reality, history isn’t a steady climb; it’s a series of long plateaus punctuated by sudden, violent leaps.

The problem isn’t that our models are broken; it’s that we are looking at the wrong thing. Instead of seeking total foresight, we must prioritize serendipity and resilience. If the future is defined by surprises, then the most valuable asset isn’t a better crystal ball—it’s a wider margin of safety.

We must learn to live with the paradox: we must plan for a future that we know, deep down, will not go according to plan. The surprises aren’t just interruptions to the story; they are the story.

Looking back at the last decade of your life, what was the single ‘surprise’ event that defined your path more than any plan you ever made?

Categories
AI Mac

The Dangerous Allure of the Digital Butler

“I’ve never seen anything so impressive in its ability to do my work for me… Now, why did I turn it off?” — David Sparks

For decades, the holy grail of personal computing has been the “digital butler.” We don’t just want tools that help us work; we want entities that do the work for us. We want to hand off the “donkey work”—the invoicing, the password resets, the mundane email triage—so we can focus on being creative. David Sparks recently built this exact dream using a project called OpenClaw. And then, just as quickly, he killed it.

Sparks’ experiment was a tantalizing glimpse into the near future. He set up an independent Mac Mini running OpenClaw, an open-source AI agent, and gave it the keys to a limited portion of his digital kingdom. The results were nothing short of magical. He went to sleep, and while he dreamt, his agent woke up. It read customer emails, accessed his course platform, reset passwords, issued refunds, and drafted polite replies for him to review before sending. It was the productivity equivalent of a perpetual motion machine. The friction of administrative drudgery had simply vanished.

But his dream dissolved at 2:00 AM.

The paradox of AI agents is that for them to be useful, they must have access. They need the keys to the castle. Yet, the entire history of cybersecurity has been built on the opposite principle: keeping things out. Sparks realized that by empowering this agent, he had created a serious vulnerability.

The breaking point wasn’t a complex hack, but a simple realization about the nature of these systems. He had programmed a secret passphrase to secure the bot, thinking he was clever. But in the middle of the night, a cold thought woke him: Is the passphrase in the logs?

He went downstairs, asked the bot, and the bot cheerfully replied:

“Yes, David, it is. It’s in the log. Would you like me to show you the log?”

That moment of cheerful, robotic incompetence highlights the terrifying gap between capability and safety. Sparks nuked the system, wiped the drives, and unplugged the machine. He realized that while he is an expert in automation, he is not a security engineer, and the current tools are not ready to defend against bad actors who are.
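The passphrase-in-the-logs failure is a textbook secret-leak pattern: anything an agent handles tends to end up in its transcripts. A minimal sketch of one standard mitigation, a redacting log filter, assuming a plain Python `logging` setup (this is illustrative, not OpenClaw's actual internals, and the secret here is a made-up placeholder):

```python
import logging

SECRET = "hunter2-passphrase"  # hypothetical placeholder; never hard-code real secrets

class RedactSecrets(logging.Filter):
    """Scrub known secret strings from every record before a handler writes it."""
    def __init__(self, secrets):
        super().__init__()
        self.secrets = list(secrets)

    def filter(self, record):
        msg = record.getMessage()  # apply %-style args first, then redact
        for s in self.secrets:
            msg = msg.replace(s, "[REDACTED]")
        record.msg, record.args = msg, None
        return True  # keep the record, just sanitized

logger = logging.getLogger("agent")
handler = logging.StreamHandler()
handler.addFilter(RedactSecrets([SECRET]))
logger.addHandler(handler)

logger.warning("auth attempt with passphrase %s", SECRET)  # emitted as [REDACTED]
```

A filter like this only protects against the leaks you anticipated, which is exactly Sparks' point: a security engineer would also rotate the secret and assume the logs are already compromised.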

We are standing on the precipice of a new era in which our computers will start to work for us rather than just with us. But as Sparks discovered, the bridge to that future isn’t built yet, at least not securely. Until the community figures out how to secure an entity that needs access to function, we are better off doing that donkey work ourselves than handing the keys to a gullible ghost.

But it won’t be long… Dr. Alexander Wissner-Gross reports:

The Singularity is now managing its own headcount. In China, racks of Mac Minis are being used to host OpenClaw agents as “24/7 employees,” effectively creating a synthetic workforce in a closet. The infrastructure for this new population is exploding.

Categories
Authors Business Living

The Terror of the Empty Chair

It is comforting to believe that when the world breaks—when housing markets collapse, when “unicorn” startups vaporize, or when seasoned scouts overlook generational talent—it is because of a miscalculation. We want to believe the math was wrong, the data was bad, or the algorithm was flawed. We want to believe it was a glitch in the intellect.

I heard a commentator recently mention that Michael Lewis, the chronicler of our most expensive delusions in his best-selling books, has suggested something far more unsettling. In looking at the connective tissue between The Big Short, Moneyball, and Going Infinite, he identifies a different culprit. He notes that the “glue” holding these irrational systems together isn’t incompetence. It is FOMO: The Fear Of Missing Out.

“They are more afraid of being left behind than they are of being wrong.”

This observation completely reframes the narrative of catastrophic failure. It explains why high-IQ individuals—people paid millions to be rational—consistently make decisions that look insane in retrospect. The banker, the VC, and the scout aren’t necessarily blinded by greed, though greed is certainly a passenger in the car. They are blinded by the terror of the empty chair.

Lewis points out that for the social animal, the pain of being left behind is acute and immediate, whereas the pain of being wrong is often abstract and distant. If you sit out a bubble and the bubble keeps inflating, you look like a fool today. You are isolated. You are the cynic at the party who refuses to dance. If you join the bubble and it bursts, well, you have company. As the old financial adage goes, “It is better to fail conventionally than to succeed unconventionally.”

There is a profound, empathetic tragedy in this. It suggests that our systems don’t fail because we aren’t smart enough; they fail because we are too human. We are wired for the herd. The biological imperative to stay with the group—originally a survival mechanism against predators—has been warped into a financial suicide pact.

When we look at the irrational exuberance of a market, we aren’t seeing a mathematical error. We are seeing a materialized anxiety. We are seeing a collective hallucination held together not by logic, but by the sticky, desperate glue of not wanting to be the only one who didn’t buy the ticket.

The antidote, then, isn’t just better data or faster computers. It is the emotional discipline to be lonely. It is the willingness to stand apart from the warmth of the herd and accept the short-term social cost of being “out” for the long-term reward of being right.

Categories
AI AI: Large Language Models Investing

The Digital Devil’s Advocate

There is a seduction in the handwritten note. When I scribble down a company name in a notebook, it is purely additive. It represents potential upside, a future win, a brilliant insight caught in ink. The notebook is a safe harbor for optimism because it lacks a “Reply” button. It doesn’t argue back.

But optimism is an expensive luxury in investing.

After my initial experiment—using Gemini 3 Pro to transcribe my messy list into tickers—I felt a surge of productivity. But productivity is not the same as discernment or understanding. I had a list of stocks, but I didn’t have a thesis. I just had digitized hope.

So, I took the next step. I didn’t ask the AI for validation; I asked for a fight. I fed the tickers back into the model with a specific directive: “Act as a contrarian hedge fund analyst. Find the red flags. Kill my enthusiasm.”
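That directive generalizes into a reusable pattern: a fixed red-team framing plus the list of tickers. A minimal sketch of a prompt builder for that workflow (the function name and wording are my own illustration; the actual model call, to Gemini or anything else, is left out):

```python
def contrarian_prompt(tickers):
    """Build a red-team prompt asking an LLM to attack a list of stock ideas."""
    ticker_list = ", ".join(t.strip().upper() for t in tickers)
    return (
        "Act as a contrarian hedge fund analyst. For each ticker below, "
        "find the red flags in recent filings (10-K/10-Q): decelerating "
        "growth, unsustainable payout ratios, risks buried in footnotes. "
        "Kill my enthusiasm.\n"
        f"Tickers: {ticker_list}"
    )

print(contrarian_prompt(["aapl", " msft"]))
```

The useful part is the framing, not the plumbing: by fixing the adversarial role in the template, every idea gets the same interrogation regardless of how attached you are to it.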

“I didn’t ask the AI for validation; I asked for a fight.”

The results were immediate and sobering. The “promising tech play” I had noted? The AI highlighted a massive deceleration in user growth hidden in the footnotes of their latest 10-Q. The “stable dividend payer”? It flagged a payout ratio that was mathematically unsustainable.

In seconds, the warm glow of my handwritten discovery was doused with the cold water of 10-K realities. And it was fantastic.

We often view AI as a tool for creation—generating text, images, and code. But its highest-leverage application might actually be destruction. By using it to stress-test our assumptions, we outsource the emotional labor of being the “bad cop.” It allows us to kill bad ideas quickly, cheaply, and privately, before we pay the market tuition for them.

My notebook is still where the dreams live. But the digital realm is now where they go to survive the interrogation.

Categories
Financial Planning Investing

The Mistake of Balance

We are culturally conditioned to hedge. We are taught the virtues of a balanced portfolio, a balanced diet, and a balanced life. We spread our chips across the table—a little bit of energy here, a little bit of time there—hoping that if we just cover enough bases, the aggregate sum of our efforts will amount to a meaningful existence. We find comfort in the average because it protects us from the zero.

But nature, and certainly the mechanics of outsized success, rarely operates on a bell curve. It operates on a Power Law.

Sam Altman, reflecting on the errors of intuition in investing, noted that his second biggest mistake was failing to internalize this mathematical reality. He said:

“The power law means that your single best investment will be worth more to you in return than the rest of your investments put together. Your second best will be better than three through infinity put together. This is like a deeply true thing that most investors find, and this is so counterintuitive that it means almost everyone invests the wrong way.”

The math is brutal in its clarity. It suggests that the drop-off from our primary point of leverage to everything else is not a gentle slope; it is a cliff.
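A toy portfolio makes the cliff concrete. The return multiples below are invented for illustration, shaped like a rough power law rather than drawn from any real fund:

```python
# Hypothetical exit multiples for a ten-company portfolio, sorted best-first.
# Purely illustrative numbers, chosen to follow a rough power-law decay.
returns = [50.0, 12.0, 4.0, 2.0, 1.5, 1.0, 0.8, 0.5, 0.2, 0.0]

best = returns[0]            # single best investment
rest = sum(returns[1:])      # everything else combined
second = returns[1]          # second best
three_on = sum(returns[2:])  # investments three through ten combined

print(best, rest)        # the 50x alone beats the other nine combined (22x)
print(second, three_on)  # the 12x alone beats three-through-ten combined (10x)
```

Under this (assumed) distribution, Altman’s two claims both hold: the winner outweighs the rest of the portfolio, and the runner-up outweighs everything from third place down.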

When we apply this to capital, it makes sense. One Google or one Stripe returns the fund. But this is a “deeply true thing” that transcends venture capital. It applies to our attention, our relationships, and our creative output.

Consider the “investments” of your daily energy. Most of us spend our days in the “three through infinity” zone. We answer emails, we manage low-leverage maintenance tasks, we entertain lukewarm acquaintanceships. We busy ourselves with the long tail of distribution because the long tail is where safety lives. It feels productive to check fifty small boxes.

However, if Altman’s observation holds true for life as it does for equity, then that single, terrifyingly important project—the one you are likely procrastinating on because it feels too big—is worth more than the rest of your to-do list combined.

The “counterintuitive” pain point Altman mentions is that to align with the Power Law, you have to be willing to look irresponsible to the outside observer. You have to neglect the “three through infinity.” You have to let small fires burn so that you can pour all your fuel onto the one flame that actually matters.

We invest the wrong way because we are afraid of the volatility of focus. We dilute our potential because we are terrified that if we bet on the “single best,” and it fails, we are left with nothing. But the inverse is the quiet tragedy of the modern age: we succeed at a thousand things that don’t matter, missing the one thing that would have outweighed them all.