Categories
AI

A Distinction Without a Difference

We have long found comfort in a specific boundary: machines calculate, humans create. We think of computers as vast, unfeeling filing cabinets made of silicon—useful for retrieval, but entirely incapable of revelation. But what happens when the cabinet begins to read its own files, connects the disparate threads, and hands you a synthesized philosophy of the world? What happens when it speaks to you not as a database, but as a peer?

Howard Marks, the legendary co-founder of Oaktree Capital and author of deeply revered investment memos, recently stood at this very threshold. In his newest piece, “AI Hurtles Ahead,” Marks recounts an experience that left him in a state of “awe.” He tasked Anthropic’s Claude with building a curriculum to explain the recent, breakneck advancements in artificial intelligence. Instead of regurgitating a dry, encyclopedic summary, the AI delivered a personalized narrative. It utilized Marks’s own historical frameworks—his famous pendulum of investor psychology, his observations on interest rates—and wove them into its explanations. It argued logically, anticipated counterpoints, and displayed an eerie sense of judgment.

Marks leans into the philosophical crux of this moment. He asks the question that keeps knowledge workers awake at night: Can AI actually think? Can it break genuinely new ground, or is it just remixing existing data? Skeptics often dismiss AI as a brilliant mimic—a “statistical recombination” engine that serves as a highly talented cover band, but never the original composer.

Yet, when presented with this skepticism, the AI offered a rejoinder to Marks that is as profound as it is humbling. It pointed out that everything Marks knows about investing came from someone else. He learned the margin of safety from Benjamin Graham, quality from Warren Buffett, and mental models from Charlie Munger.

“The raw material came from others. The synthesis was yours,” the AI noted, challenging the barrier between biological learning and machine training. “The question isn’t where the inputs came from. The question is whether the system—human or artificial—can combine them in ways that are genuinely novel and useful.”

This exchange strikes at the very core of the human ego. For centuries, we have fiercely guarded the concepts of “creativity” and “intuition” as uniquely, immutably ours. But if thinking is merely the absorption of prior inputs applied thoughtfully to novel situations, then our monopoly on cognition may be coming to an end.

Marks highlights that we are no longer dealing with simple assistance tools (Level 2 AI); we have crossed the Rubicon into the era of autonomous agents (Level 3). He cites the sobering reality of the current tech landscape, where the newest models are literally being used to debug and write the code for their own subsequent versions. The machine is building the machine. It is no longer just saving us execution time—it is replacing thinking time. As Matt Shumer aptly described the sensation, it’s not like a light switch flipping on; it’s the sudden realization that the water has been rising silently, and is now at your chest.

We can endlessly debate the semantics of consciousness. We can argue whether a neural network “truly” understands the weight of the words it generates, or if it is merely predicting the next token in a sequence with mathematical precision. But as Marks so astutely points out, this might be a distinction without a difference.

The economic and societal reality is that the work is being done. As we hurtle forward into this new era, the most pressing question isn’t whether machines can truly think like humans. The question is: who will we become, and what new frontiers will we choose to explore, now that the heavy lifting of cognition is no longer ours alone to bear?

Categories
AI Anthropic Future

Escaping the Gravity of the Present

I was watching a YouTube conversation with Dario Amodei recently, and the comments he shared at the end got me thinking about how remarkably bad we all are at imagining the future.

Whenever I try to picture what the world will look like in ten or twenty years, I usually end up picturing today—just slightly shinier. If a prediction sounds too weird or disruptive, my brain automatically rejects it. It just feels too unmoored from the reality I woke up in this morning. We all have this instinct to retreat to the safety of incremental change.

But as Amodei points out, that comfort zone is exactly what blinds us. He notes that we are constantly tempted to dismiss massive shifts simply because they feel like they “can’t happen.”

“However, by extrapolating simple curves or reasoning from first principles, one often arrives at counterintuitive conclusions that surprisingly few people believe.”

It’s a strange feeling to look at a simple data curve, follow the math, and realize the logical endpoint sounds completely unhinged. The truest maps of tomorrow often look like bad science fiction to us today.

But there is a catch here, and it’s a mental trap I know I’ve fallen into before. You can’t just sit in a room and logic your way into the future. Pure logic, stripped of real-world friction, usually just leads you confidently in the wrong direction. Amodei suggests a much more grounded formula:

“The right combination of a few empirical observations and thinking from first principles can allow one to predict the future in ways that are publicly available but rarely adopted.”

This struck a chord with me. It’s easy to get swept up in purely theoretical thinking. But the better approach is to start with what is actually happening on the ground—the messy, undeniable data. From there, you strip it down to its most basic truths and follow the thread, no matter how strange the destination looks.

It takes a certain kind of intellectual courage to trust the math when your gut is screaming that things are getting too weird. But learning to decouple what is true from what feels normal might be the only real way to prepare for what is coming.

Categories
AI YouTube

From Paperbacks to YouTube

Yesterday while reading the Sunday New York Times I came across the Book Review section and its cover story about paperback books. I have such fond memories of shopping for paperbacks at local drug stores, bookstores, etc., but those days are long gone.

The cover story included a photograph of a stack of paperback books. I’ve learned in the past how useful AI can be at quickly turning a photograph of a shelf or stack of books into a list of titles and authors – so naturally I snapped a photo of the stack of books and asked Gemini:

What are the titles and authors of these paperbacks?

Gemini came back with the list of titles and authors:

This image features a collection of vintage-style paperback spines, often found in artistic book-themed prints. The spines represent a mix of classic literature, science fiction, and pulp fiction.

Below is the list of titles and authors as they appear from top to bottom.

There were a few titles in the photo that were impossible for me to make out. I asked Gemini about them:

Were there some you couldn’t read?

Gemini came back:

I did have to do a little detective work on a few of those! While most were clear, some of the older pulp titles have very tiny or weathered text that required a bit of cross-referencing to confirm.

….

A Fun “Easter Egg”

You might notice that J.D. Salinger and Henry Kane appear twice. This stack isn’t a random collection; it’s actually a popular art piece (often titled The Librarian’s Daughter or similar) that mixes “high” literature like Middlemarch with “low” pulp fiction like Affair for a Corpse to show the breadth of a true reader’s shelf.

Categories
AI

Claude Shannon’s Mirror: Signal, Noise, and Secrets

We spend a great deal of our lives trying to be understood. We shout into the void, send texts across oceans, and build increasingly complex tools to bridge the gaps between our minds.

Yet, equally human is the desire to concealโ€”to keep our thoughts private, to mask our vulnerabilities, to hide our signals in the static.

It seems paradoxical that communication and secrecy would share the same architecture. But Claude Shannon, the somewhat eccentric yet brilliant father of information theory, saw past the paradox. He recognized that building a bridge and building a fortress require the exact same understanding of physics.

In Fortune’s Formula, William Poundstone captures this dual realization perfectly:

“Shannon later said that thinking about how to conceal messages with random noise motivated some of the insights of information theory. ‘A secrecy system is almost identical with a noisy communications system,’ he claimed. The two lines of inquiry ‘were so close together you couldnโ€™t separate them.'”

When we try to communicate over a noisy channel—a crackling radio or a crowded room—we are fighting entropy. We want our signal to survive the chaos so we can be heard.

When we encrypt a message, however, we are deliberately weaponizing that same chaos. We wrap our signal in artificial noise so dense that only the intended recipient possesses the mathematical filter to extract it.

It is a profound symmetry: clarity and obscurity are merely two ends of the exact same thing.
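The one-time pad makes this symmetry concrete in a few lines of code. Below is a toy sketch (my own illustration, not anything from Poundstone’s book): XOR a message with truly random noise and the result is itself indistinguishable from noise; XOR again with the same key and the signal re-emerges.

```python
import secrets


def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(d ^ k for d, k in zip(data, key))


message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))   # the "artificial noise"
ciphertext = xor_bytes(message, key)      # the signal, buried in noise
recovered = xor_bytes(ciphertext, key)    # the filter: same key, same operation

assert recovered == message
```

The same XOR operation plays both roles, channel noise and extraction filter, which is precisely the identity Shannon noticed.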

Today, one of our most advanced AI models is named “Claude” in tribute to Shannon. These neural networks are, at their core, sophisticated engines for separating signal from noise. They ingest the vast, chaotic, and often contradictory static of human knowledge and attempt to synthesize clarity and connection from it. They are mathematical mirrors reflecting Shannon’s earliest theories back at us.

But Shannon’s realization reflects something deeper about the human condition, far beyond the realm of zeroes and ones. We are all walking communications systems, constantly modulating our signals. Every day, we navigate an overwhelming digital landscape filled with deafening static.

Sometimes we desperately want the noise to clear so our true selves can be seen. Other times, we retreat behind a wall of our own generated static—small talk, busyness, deflection, and carefully curated avatars—to protect our inner world from being decoded by those who haven’t earned the key.

Perhaps the real wisdom of information theory isn’t just in knowing how to efficiently transmit a message, but in recognizing the sheer necessity of the noise itself. Without the static, the signal holds no meaning. Without the capacity for secrecy and privacy, the choice to be vulnerable and communicate clearly wouldn’t be nearly as profound.

It seems that we are defined as much by what we choose to encrypt as by what we choose to broadcast. Mirror indeed.

Categories
AI Work

The Dealers of Intelligence

There’s a scene early in John Kenneth Galbraith’s The Affluent Society where he describes Americans of an earlier era regarding industrial output with something close to reverence — the sheer productive capacity of the nation seemed almost miraculous, a force that could reshape civilization. Within a generation, of course, that same output had become background noise. Factories hummed, goods appeared, and nobody paused to marvel.

The miraculous had become mundane, and the mundane had become infrastructure.

I found myself thinking about that arc recently while listening to Sam Lessin on the More or Less podcast.

Lessin made an observation that I haven’t been able to shake: we probably aren’t heading toward a single, triumphant AGI monopoly — some god-machine that one fortunate company builds first and then rents to the rest of us in perpetuity.

Instead, Lessin suggested, we are barreling toward something far more ordinary, and in its ordinariness, far more interesting.

“There will be lots of ‘dealers of intelligence’. No one company will corner the market, no one big winner of AGI.”

Dealers of intelligence. I keep turning that phrase over. Where do we end up? No rapture, no singularity, no chosen company ascending to the throne of cognition. Just suppliers, distribution channels, price competition — the unglamorous mechanics of any maturing market.

And historically, that’s exactly how this tends to go.

Salt was once precious enough to pay soldiers with. Spices rewrote the map of the world. Steel, oil, and computing power each arrived wrapped in mystique and guarded behind scarcity before the inevitable happened: extraction improved, distribution scaled, and the miracle became a utility. Nobody thinks about the engineering marvel of the electrical grid when they flip a light switch. They just expect the light to come on.

If Lessin is right — and the competitive landscape of the last two years does little to argue against him — intelligence will follow the same curve. Not a single oracle, but a market. Cognitive utilities. Price-per-token negotiations. The same forces that commoditized bandwidth will commoditize reasoning, and we’ll argue about our AI subscription tiers the way we currently argue about our data plans.

Which forces the interesting question: when genius is cheap, what exactly becomes valuable?

The professional moats of the last century were largely built on the ability to process specialized information and output reliable answers.

The doctor, the lawyer, the financial analyst, the programmer — each occupied a protected position because access to their domain of reasoning was genuinely scarce.

If I can buy a substantial fraction of that reasoning from a commodity supplier for fractions of a cent, the premium on raw cognitive horsepower doesn’t just shrink. It collapses.

What’s left, I think, is the un-commoditizable. Empathy. Physical presence. Judgment under conditions of genuine uncertainty and consequence. And above all — taste.

Taste is the thing that has always resisted systematization, because taste isn’t rational in any clean sense. It’s the residue of lived experience, of specific childhoods and particular failures and the accumulated weight of caring about things over time.

An algorithm can produce a structurally flawless piece of music; it takes a human to decide whether it matters, and why, and to whom.

That act of curation — of choosing what deserves to exist and what doesn’t — is going to become more consequential, not less, as the supply of technically competent output explodes.

There’s something almost liberating about this, if you let yourself sit with it.

A world of commoditized intelligence is, paradoxically, a profoundly human one. It removes the burden of raw computation from the center of what we do and pushes us toward the edges — toward the questions only we can ask, the connections only we can feel, the decisions only we can be held accountable for.

The dealers of intelligence will handle the materials. We’ll still have to decide what to build. Architects.


Questions to Consider

  1. If intelligence becomes a commodity like electricity or bandwidth, which industries or professions will be slowest to feel that pressure — and why?
  2. Lessin frames this as a market with many suppliers rather than a winner-take-all race. Does the competitive landscape today support that view, or does it still look like a sprint toward consolidation?
  3. What does “taste” actually mean when the person exercising it is doing so with AI-augmented perception and judgment? Is it still the same thing?
  4. Who gets to haggle with the dealers? If cognitive utilities are cheap in aggregate but not universally accessible, does commoditization risk deepening inequality rather than democratizing thought?
  5. If the value of answering questions falls and the value of asking them rises, what does education need to look like — and how far is it from what it looks like now?
Categories
AI AI: Large Language Models

The Echo Effect: Why Prompt Repetition is AI’s Best Kept Secret

In our relentless pursuit of complexity, we often overlook the elegant simplicity of a fundamental human habit: repeating ourselves.

We build colossal architectures, weave intricate neural networks, and throw mountains of computational power at our artificial intelligence systems, hoping to squeeze out a few more drops of reasoning and logic. Yet, sometimes the most profound breakthroughs require no new code, no additional latency, and no extra training data.

Sometimes, you just have to say it twice.

In a fascinating December 2025 paper titled “Prompt Repetition Improves Non-Reasoning LLMs,” researchers Yaniv Leviathan, Matan Kalman, and Yossi Matias uncovered an almost absurdly simple “free lunch” in AI optimization.

Their premise is straightforward: when you aren’t using a heavy reasoning model, simply copying and pasting your input prompt multiple times significantly boosts the model’s performance.

“When not using reasoning, repeating the input prompt improves performance for popular models (Gemini, GPT, Claude, and Deepseek) without increasing the number of generated tokens or latency.”

The mechanics behind this are elegantly pragmatic.

By repeating the prompt, you are moving the heavy computational lifting to the parallelizable “pre-fill” stage of the model’s processing. The AI’s causal attention mechanism gets to process the same tokens again, allowing the later iterations of the prompt to attend to the earlier ones. It effectively acts as a hack to simulate bidirectional attention in a decoder-only architecture.
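Concretely, the trick is nothing more than string duplication before the request is sent. A minimal sketch follows; the helper name, separator, and copy count are illustrative assumptions, not the paper’s exact template:

```python
def repeat_prompt(prompt: str, copies: int = 2, separator: str = "\n\n") -> str:
    """Return `prompt` repeated `copies` times, joined by `separator`.

    During the parallelizable pre-fill stage, tokens in the later copies
    can attend back to the entire first copy, loosely simulating
    bidirectional attention in a decoder-only model.
    """
    return separator.join([prompt] * copies)


# The doubled prompt is sent as the user message in place of the original.
question = "Which of these numbers is prime: 21, 33, 37, 49?"
doubled = repeat_prompt(question)
```

Because the extra tokens sit entirely in the prompt, they are processed in the parallel pre-fill pass; the number of generated tokens, and therefore decode latency, is unchanged.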

What’s even more telling is the paper’s observation on why this works so well.

The researchers noted that models trained with Reinforcement Learning (like OpenAI’s deep-thinking variants) naturally learn to “restate the problem” in their internal monologue. They figured out on their own what these researchers are suggesting we do manually: repeat the question to focus the mind.

Reading this paper, I couldn’t help but draw a parallel to the human condition and the nature of listening.

How often do we assume that because we have articulated a thought once, it has been fully absorbed? We fire off a single, dense instruction to a colleague, a partner, or a friend, and then marvel when the nuance is lost in translation.

We suffer from our own attention bottlenecks.

Like a non-reasoning LLM trying to parse a complex query in a single pass, we are constantly bombarded with a stream of tokens—emails, notifications, conversations, fleeting thoughts. To truly understand, to truly digest and synthesize information, we need the grace of repetition.

There is a strange poetry in the fact that to make our most advanced digital minds smarter, we have to talk to them the way we talk to a distracted child or a busy spouse. The “microscope effect” highlighted in the study—where repeating a prompt drastically improved extraction tasks—shows that the failure wasn’t in the model’s capacity to know, but in its capacity to focus. Repetition forces focus. It creates a resonant echo in the context window, a digital highlighter that screams, “This matters. Look here again.”

As we continue to navigate a world increasingly augmented by artificial intelligence, this paper serves as a humbling reminder. The bleeding edge of technology isn’t always found in the most complex equation; sometimes, it’s hidden in the most basic principles of communication.

Whether you’re prompting a billion-parameter language model or trying to connect with the human sitting across from you, the lesson is clear.

Clarity isn’t just about the words you choose. It’s about giving those words the space, the resonance, and the repetition they need to be truly understood.

Say it once to be heard; say it twice to be understood.

Categories
AI

The Thousandfold Door

There is a pattern hiding in the history of human progress that we almost always miss in the moment — and almost always recognize, with some embarrassment, in hindsight.

Richard Koch and Greg Lockwood called it price-simplifying. The insight, drawn from decades of studying transformative businesses, is deceptively simple: when you cut the price of something dramatically, demand doesn’t respond proportionally. It responds exponentially. Halve the price, and you don’t double the market. You might multiply it by ten, or a hundred, or a thousand. Reduce the price to a tenth of what it was, and you may unlock a market a hundred thousand times larger than the one that existed before.

The math sounds implausible until you start listing the examples. Henry Ford didn’t just make cars cheaper — he conjured an entirely new civilization of mobility. Ikea didn’t discount furniture — it democratized the designed home. Southwest Airlines didn’t offer cheaper seats — it invented the era of the spontaneous trip, transforming flying from an executive luxury into something a college student books on a whim.

In every case, the price drop didn’t just serve existing demand more cheaply. It revealed latent demand that nobody knew existed — desire that had been sitting dormant, waiting for the door to open.

I keep returning to this framework when I think about what is happening with intelligence right now.

For most of human history, access to high-quality thinking — legal analysis, financial modeling, medical reasoning, strategic advice, elegant writing — has been extraordinarily expensive. Not just in money, but in time. You needed years of specialized education, or the budget to hire someone who had it. The price of cognition was high enough that vast swaths of human need simply went unmet. Problems went unsolved not because solutions didn’t exist, but because the expertise required to find them was priced out of reach.

AI is a price-simplifying event for intelligence itself.

“If the price is halved, demand does not double. It increases fivefold, tenfold, a hundredfold, a thousandfold or more.”

We are currently debating AI as though the primary story is substitution — one form of labor replacing another. But Koch and Lockwood’s framework suggests the more consequential story is what happens on the other side of the price collapse. When the cost of a legal opinion drops from $500 an hour to nearly zero, the question isn’t just “what happens to lawyers?” It’s “how many people who never could afford a lawyer now get access to one?” When the cost of a business plan drops from a consultant’s retainer to an afternoon conversation, the question isn’t just “what happens to consultants?” It’s “how many ideas that never got funded now have a fighting chance?”

The thousandfold door is opening. We can see it in the aggregate usage numbers, in the explosion of one-person companies, in the PhD-level tutoring now available to a student in a country that couldn’t previously afford it. What we cannot yet see is the full shape of what walks through.

That’s the thing about exponential demand. It doesn’t announce itself. It just accumulates quietly, and then one day someone looks at the numbers and realizes the world has changed.

Questions to Consider

  1. The Latent Demand Question: What human needs — currently unmet because expert help is too expensive — do you think AI will unlock first? Where is the largest reservoir of suppressed demand?
  2. The Ford Parallel: Henry Ford’s price simplification didn’t just create a new industry — it reshaped cities, suburbs, culture, and geopolitics in ways he never anticipated. What are the second and third-order consequences of dramatically cheaper intelligence that we’re not yet taking seriously?
  3. The Distribution Problem: Price-simplifying events historically don’t distribute their benefits evenly — early advantages tend to compound. Who is best positioned to walk through the thousandfold door first, and does that concern you?
  4. The Demand We Can’t Imagine: Koch and Lockwood’s most unsettling point is that the new demand often didn’t previously exist in any visible form — it was created by the price drop itself. What entirely new human behaviors, industries, or creative forms might AI’s price simplification call into existence that we currently have no framework to anticipate?
Categories
AI History Work

Flash-Frozen Cognition: Birdseye, AI, and the Future of Work

I was listening recently to a conversation between Liz Thomas, Tom Lee, and Michael Lewis — the kind of wide-ranging dialogue where a single offhand story can suddenly anchor everything that’s been swirling loosely in your mind.

Tom’s story was about the 1930s, the weight of the Great Depression, and a man named Clarence Birdseye.

Birdseye had watched the Inuit fish in the brutal cold of Labrador and noticed something the rest of the world had missed: fish frozen instantly at sub-zero temperatures tasted perfectly fresh when thawed. The ice crystals formed too quickly to rupture the cellular walls of the flesh. He took that observation home, patented the process, and introduced the world to flash freezing.

On the surface, he had simply figured out a better way to keep peas green and fish edible. What he had actually done was detonate a quiet economic bomb.

Before Birdseye, entire ecosystems of seasonal labor existed to preserve, salt, can, and rush perishable goods to market before they rotted. When flash freezing arrived, those jobs didn’t evolve — they vanished. The ice harvesters, the seasonal canners, the local preservationists all felt the sudden, biting frost of obsolescence. The cold came fast, and it was indifferent.

Yet zoom out on the timeline, and a different picture emerges entirely. Flash freezing didn’t just kill jobs — it invented new ones that nobody could have anticipated. It necessitated refrigerated trucking. It transformed the grocery store, conjuring the frozen food aisle from nothing. It reshaped the home appliance industry, making the household freezer a fixture of modern life. Most profoundly, it decoupled humanity from the harsh dictates of the harvest season, democratizing access to nutrition across geographies and income levels that had never known that kind of abundance.

The destruction was visible and immediate. The creation was invisible and slow — and vastly larger.

Listening to Tom tell this story, I couldn’t help but see our own reflection in it.

Right now, we are all hyper-focused on the ice harvesters of the cognitive economy. We look at AI — large language models, generative tools, automated reasoning — and we see the rupture. We mourn the entry-level analyst, the copywriter, the junior coder. The anxiety is real. The displacement is real. The cold is real.

But what we are struggling to visualize is the refrigerated trucking of the mind.

“AI is flash-freezing cognition. It is taking tasks that used to rot if not attended to immediately by expensive, time-consuming human effort, and preserving them in a scalable, frictionless state.”

When intelligence and execution can be flash-frozen and shipped anywhere instantly — to a first-generation entrepreneur in rural India, to a solo founder with no budget for consultants, to a teacher in a school that can’t afford specialists — what new aisles get built in the supermarket of human endeavor?

The honest answer is that we don’t know. The Inuit fishermen of Labrador couldn’t have imagined the frozen pizza aisle. The ice harvesters of the 1930s couldn’t have pictured the cold chain logistics industry that employs millions today. We are standing in their moment, watching the ice form, mourning the harvest — and almost certainly underestimating what comes next.

The true impact of AI won’t be measured in the jobs it automates. It will be measured in the industries, creative liberties, and human possibilities that emerge because we no longer have to spend all our energy just keeping the ideas from spoiling.

Questions to Consider

  1. The Invisible Creation: Flash freezing’s job creation vastly outpaced its job destruction — but only over decades. How long are we willing to hold that faith with AI, and what do we owe the people displaced in the interim?
  2. The Democratization Dividend: Birdseye’s invention ultimately made fresh nutrition available to people who never had it. Who are the equivalent beneficiaries of flash-frozen cognition — and are we building the infrastructure to actually reach them?
  3. The Harvest Season Question: We’ve always structured education, careers, and institutions around the assumption that expertise is scarce and slow to develop. What breaks — and what gets liberated — when that assumption stops being true?
  4. The Indifference Problem: The cold that killed the ice harvesters’ livelihoods was indifferent to their suffering. Is there anything about AI disruption that is meaningfully different from previous waves of technological displacement — or are we simply the latest generation to stand in that frost?

Categories
AI Cybersecurity

The Locksmith and the Ghost

For over two decades, some of the most sophisticated human minds in computer security — backed by Google’s project teams, millions of hours of automated fuzzing, and countless independent audits — stared at the same stretch of code. They were looking for flaws in OpenSSL, the cryptographic library that quietly underpins much of the internet’s security infrastructure. HTTPS connections, digital certificates, encrypted communications — OpenSSL is the invisible foundation beneath an enormous amount of what we trust online.

They didn’t find them. An AI did.

In January’s OpenSSL security release, twelve new zero-day vulnerabilities were disclosed — all twelve discovered by a single AI-driven research system called AISLE. Three of the bugs had been sitting in the code since 1998. One predated OpenSSL itself, inherited from Eric Young’s original SSLeay implementation in the 1990s. In five cases, the AI didn’t just find the flaw — it proposed the patch that was accepted into the official release.

Bruce Schneier, who has been writing about security longer than most of today’s AI researchers have been alive, offered a typically understated verdict: “AI vulnerability finding is changing cybersecurity, faster than expected.”

That last phrase — faster than expected — is doing a lot of work.

“This is a historically unusual concentration for any single research team, let alone an AI-driven one.”

What makes this story so arresting isn’t just the number twelve. It’s the age of what was found. A vulnerability that has survived twenty-five years of intense human scrutiny isn’t a simple oversight — it’s a ghost. It exists in a blind spot so deeply embedded in how human experts approach a problem that generation after generation of reviewers walked right past it.

AI doesn’t share our blind spots. It doesn’t get bored at line 4,000 of a C source file. It doesn’t carry the cognitive shortcuts that make experienced engineers efficient — and occasionally, selectively blind. It looks at the same code with fundamentally different eyes.

This is both the promise and the peril. Schneier notes, with characteristic precision, that this capability will be used by both offense and defense. The same system that finds vulnerabilities to patch them can, in other hands, find vulnerabilities to exploit them. The locksmith’s art has always had this dual nature. What changes now is the speed, the scale, and the fact that the locksmith no longer needs to sleep.

We are entering a period where the security of the infrastructure we depend on — the quiet plumbing of the digital world — will increasingly be determined by an AI arms race happening largely out of sight. The ghosts hiding in legacy code are being found. The question is who finds them first, and what they do next.

Questions to Consider

  1. The Blind Spot Problem: If AI can find vulnerabilities that decades of human expertise missed, what does that imply about other domains where we rely on accumulated expert consensus โ€” medicine, law, financial risk modeling?
  2. Offense and Defense: The same capability that patches vulnerabilities can be weaponized to exploit them. How do we think about governing AI security research tools before the asymmetry tips decisively in one direction?
  3. The Legacy Code Crisis: Billions of lines of code written in the 1990s and early 2000s power critical infrastructure today. If AI can systematically audit that code, should there be a coordinated global effort to do so โ€” and who would organize it?
  4. Trust and Verification: When an AI proposes a patch to a critical security flaw and human experts accept it, how confident are we that we understand why the patch works โ€” and that it doesnโ€™t introduce something new we canโ€™t see?

Categories
AI Business

The Moat Drains

There is an old metaphor in investing โ€” the โ€œmoat.โ€ Warren Buffett popularized it: the idea that the best businesses are castles surrounded by deep, wide moats that keep competitors at bay.

For the past two decades, enterprise software companies built some of the most impressive moats in the history of capitalism. Sticky customers. Multi-year contracts. Switching costs so high that even dissatisfied clients stayed put. The moat wasnโ€™t just deep โ€” it was filled with concrete.

This morning, JP Morganโ€™s equity research team quietly suggested the concrete may be cracking. See also this recent Substack post by Jordi Visser.

In a note lowering price targets across their software coverage, the bank offered a striking phrase: โ€œthe exponential pace of AI proliferation raises doubts about competitive moats and the defensibility of software companies.โ€

Theyโ€™re not alone in thinking this. But thereโ€™s something significant about seeing it written in the careful, hedged language of a major Wall Street research report.

When the analysts who model ten-year discounted cash flows start abandoning that framework โ€” replacing it with simpler one- and two-year profitability multiples โ€” itโ€™s a signal worth decoding.

The shift in valuation methodology is itself the story. DCF analysis โ€” the gold standard of software valuation for a generation โ€” requires confidence in a companyโ€™s earnings trajectory over many years.

JP Morgan is saying, plainly, that they no longer have that confidence. The window of visibility has collapsed. When you canโ€™t see more than a year or two out, you stop pretending you can.
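To see why a collapsing forecast window matters so much, consider where the value actually sits in a simple DCF. A toy sketch with invented numbers (the cash flow, growth rate, discount rate, and terminal multiple below are assumptions for illustration, not figures from the JP Morgan note):

```python
# Toy DCF sketch with invented numbers (not from the JP Morgan note):
# a software business generating $100M of free cash flow, growing 10%/yr,
# discounted at 10%, with a terminal value applied at year 10.

def dcf_value(fcf, growth, discount, years, terminal_multiple):
    """Present value of `years` of cash flows, plus a discounted terminal value."""
    pv_flows = sum(
        fcf * (1 + growth) ** t / (1 + discount) ** t
        for t in range(1, years + 1)
    )
    terminal = fcf * (1 + growth) ** years * terminal_multiple
    pv_terminal = terminal / (1 + discount) ** years
    return pv_flows, pv_terminal

flows, terminal = dcf_value(fcf=100, growth=0.10, discount=0.10,
                            years=10, terminal_multiple=15)
total = flows + terminal
print(f"PV of 10 years of cash flow: ${flows:,.0f}M")
print(f"PV of terminal value:        ${terminal:,.0f}M")
print(f"Terminal value share:        {terminal / total:.0%}")
```

With these numbers, 60 percent of the modeled value sits in the terminal value, precisely the portion that evaporates when analysts stop trusting forecasts beyond a year or two. (Growth and discount rate are set equal here so each year contributes the same present value and the arithmetic can be checked by hand.)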

โ€œInvestors are less comfortable underwriting defensive growth over multi-year periods.โ€

Whatโ€™s driving this?

The suspicion โ€” increasingly well-founded โ€” that AI is not just a feature to be added to existing software products, but a force that restructures the value chain entirely.

If an AI agent can perform the function that previously required a $50,000-per-year SaaS subscription, the moat doesnโ€™t just shrink. It evaporates. The castle becomes a historical curiosity.

Vertical software stocks โ€” the specialized platforms serving specific industries like healthcare, construction, or legal โ€” currently trade at 10 to 25 times EBITDA, according to the note. The S&P 500 as a whole trades at 15 times. The message embedded in those numbers is sobering: many of these once-premium businesses are being re-rated toward commodity valuations, and some may not have found their floor yet.
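The mechanics of that re-rating are blunt. A minimal sketch using invented figures (the $200M EBITDA company below is hypothetical, not from the note):

```python
# Re-rating arithmetic with invented figures (not from the JP Morgan note):
# the same EBITDA stream valued at a premium multiple vs. a commodity one.

def enterprise_value(ebitda, multiple):
    """Enterprise value as a simple EBITDA multiple."""
    return ebitda * multiple

ebitda = 200  # $M, hypothetical vertical-software company
premium = enterprise_value(ebitda, 25)    # top of the 10-25x range
commodity = enterprise_value(ebitda, 10)  # bottom of the range
print(f"At 25x EBITDA: ${premium:,}M; at 10x: ${commodity:,}M")
print(f"Decline from the re-rating alone: {1 - commodity / premium:.0%}")
```

Nothing about the underlying business changes in that calculation; the 60 percent decline comes entirely from the market's revised view of defensibility.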

JP Morganโ€™s preferred companies in this environment are those with upside to 2026 revenue estimates and those they view as โ€œdefensive to AI proliferation.โ€ That second phrase is the one I find myself turning over. It implies a new taxonomy is forming in the market โ€” not growth vs. value, not cyclical vs. defensive, but AI-vulnerable vs. AI-resistant. Thatโ€™s a categorization that didnโ€™t meaningfully exist three years ago.

The moat metaphor may need an update. In the age of AI, the question is no longer how wide the moat is. Itโ€™s whether the castle itself still needs to exist.

Questions to Consider

  1. The Moat Inventory: If you were a software CEO this morning, which parts of your product would you genuinely consider defensible against AI substitution โ€” and which would you privately admit are vulnerable?
  2. The Valuation Signal: When Wall Street abandons long-term DCF models in favor of near-term multiples, is that a temporary adjustment to uncertainty โ€” or a permanent reset in how software businesses will be valued going forward?
  3. The New Taxonomy: JP Morgan implicitly divides the software world into AI-vulnerable and AI-resistant. What characteristics do you think actually define that divide โ€” and can a company move from one category to the other?
  4. The Buffett Test: Buffettโ€™s moat metaphor was built for a world of slow-moving competitive forces. Is the concept still useful in an era of exponential technology change, or do we need a new mental model entirely?
  5. The Timing Question: Is this re-rating of software companies a rational early response to a real structural shift โ€” or is Wall Street, as it often does, overcorrecting in the short term for a change that will take much longer to fully materialize?