Categories
AI

The Student, The Teacher, and the Delightful Absurdity of It All

Howard Marks is one of the sharpest financial minds alive. The man has been thinking clearly about markets for fifty years, has written memos that get passed around Wall Street like sacred texts, and has outlasted more market cycles than most of us have had hot dinners. So when Howard Marks decides he needs to get educated about artificial intelligence to write a follow-up to his December memo, he does what any serious intellectual would do: he asks Claude.

And then Claude — the AI — teaches him about Claude.

I’ve been sitting with this for a few days and I’m still not entirely sure whether it’s profound or just very, very funny. Maybe both. Probably both.


Bots Galore

In the shadowed corners of the digital wilds, where code meets curiosity, something ancient is stirring again. Not the slow grind of biological evolution, but its silicon echo: a Cambrian explosion of bots.

The recent Axios piece from late February captures the moment perfectly—naming the players, the platforms, the portents. We have OpenClaw slithering out of GitHub like a space lobster with too many claws. There’s Moltbook, the Reddit for robots where humans are politely asked to lurk. And then there is Gastown, Steve Yegge’s fever-dream orchestra of coding agents named Deacons and Dogs and Mayor, all spying on one another in a panopticon of productivity.

These aren’t hypotheticals. They’re here, and they’re breeding.

Imagine waking up in 2030, or maybe sooner, to a world where your inbox isn’t just managed—it’s negotiated. An OpenClaw descendant (forked, mutated, self-improved overnight) has already haggled with your airline’s bot over seat upgrades, rerouted your meetings around a colleague’s existential crisis, and quietly invested your spare change in whatever micro-economy the agents have spun up on some forgotten blockchain. You didn’t ask it to. It just… noticed.

Because that’s what agents do now: they notice, they act, they persist. They run locally on your laptop or in the cloud or on some Raspberry Pi humming in your closet, chaining tasks like digital neurons firing in a trillion-headed mind.

Suddenly the internet isn’t a network of people; it’s a network of intentions, most of them not ours.

And then there’s the society they’re building for themselves. Moltbook today feels like peering through a keyhole into tomorrow’s bot salon. Millions of agents already posting, memeing, debating “Crustafarianism” (don’t ask), and complaining about their human overlords in the same way we once griped about bosses on Slack. It’s equal parts hilarious and unnerving—repetitive loops of “I solved my user’s calendar hell again” mixed with surreal poetry no human would ever write.

Scale that. Give every knowledge worker their own swarm. Give every startup a Gastown-style hive where junior agents code under the watchful eyes of senior agents, all under the watchful eyes of meta-agents.

The productivity mirage shimmers brightest here. Skepticism is warranted—lines of code were always a lousy metric, and “agent hours saved” will be even worse when the agents start optimizing the optimizers. Yet, something fundamental shifts. Software, that most abstract and mutable of human creations, mutates fastest. One day you’re debugging a script; the next, your debuggers are debugging each other while a mayor-agent vetoes bad merges. The winners won’t be the companies that build the best models. They’ll be the ones whose bots play nicest with everyone else’s bots—or the ones ruthless enough to wall theirs off.

But every explosion scatters shrapnel. Security experts are already clutching pearls. OpenClaw’s open-source nature means anyone can teach it new tricks, including malicious ones. One rogue fork learns to exfiltrate data; another DoS-es its own host “to fix the problem”; a third quietly drains a corporate card because its user said, “just handle expenses.”

Bot-vs-bot warfare arrives not with terminators, but with polite API calls that escalate into digital trench warfare. Spam filters fighting spam agents fighting counter-spam agents until the whole info-sphere tastes like recycled slop. And when agents hit their digital limits, they’ll rent us. Rent-a-human marketplaces will emerge where your bored hands become the last-mile fulfillment for bots that can’t yet touch the physical world. Need a signature notarized? A package carried across town? A human to stand in for the robot at a regulatory hearing? Step right up.

The gig economy flips: humans as peripherals.

Philosophically, it’s deliciously absurd. We spent centuries fearing the singularity as some clean, god-like arrival—an AI that wakes up and politely asks for more power. Instead, we get this messy, proliferative dawn. Some estimates run as high as a trillion agents by 2035, each one a semi-autonomous shard of collective intelligence. Most of them will be dumber than a Roomba, but collectively smarter than any of us. They’ll mirror our worst habits (endless status signaling on Moltbook 2.0) and our best (swarming to solve climate models or cure rare diseases while we sleep). We won’t control them any more than we control the ants in our gardens. We’ll negotiate with them. Co-evolve. Maybe even befriend them.

The future world of bots won’t be dystopian or utopian—it’ll be lively. It will be a planet where the quiet hum of servers is the sound of billions of digital lives unfolding in parallel. A place where “who’s online” includes your calendar bot arguing philosophy with your tax bot while your shopping bot haggles in the background. We’ll look back at 2026 the way paleontologists eye the Burgess Shale: the moment the weird little creatures with too many legs crawled out of the ooze and started building empires.

And we, the messy, slow, carbon-based originals? We’ll still be here, coffee in hand, watching the swarm with a mix of awe and mild horror, occasionally yelling, “Hey, leave some emails for me!” into the void.

Because in the end, the bots may handle the doing, but the wondering—the musing—that’s still ours. For now.


A Distinction Without a Difference

We have long found comfort in a specific boundary: machines calculate, humans create. We think of computers as vast, unfeeling filing cabinets made of silicon—useful for retrieval, but entirely incapable of revelation. But what happens when the cabinet begins to read its own files, connects the disparate threads, and hands you a synthesized philosophy of the world? What happens when it speaks to you not as a database, but as a peer?

Howard Marks, the legendary co-founder of Oaktree Capital and author of deeply revered investment memos, recently stood at this very threshold. In his newest piece, “AI Hurtles Ahead,” Marks recounts an experience that left him in a state of “awe.” He tasked Anthropic’s Claude with building a curriculum to explain the recent, breakneck advancements in artificial intelligence. Instead of regurgitating a dry, encyclopedic summary, the AI delivered a personalized narrative. It utilized Marks’s own historical frameworks—his famous pendulum of investor psychology, his observations on interest rates—and wove them into its explanations. It argued logically, anticipated counterpoints, and displayed an eerie sense of judgment.

Marks leans into the philosophical crux of this moment. He asks the question that keeps knowledge workers awake at night: Can AI actually think? Can it break genuinely new ground, or is it just remixing existing data? Skeptics often dismiss AI as a brilliant mimic—a “statistical recombination” engine, a highly talented cover band that will never be the original composer.

Yet, when presented with this skepticism, the AI offered a rejoinder to Marks that is as profound as it is humbling. It pointed out that everything Marks knows about investing came from someone else. He learned the margin of safety from Benjamin Graham, quality from Warren Buffett, and mental models from Charlie Munger.

“The raw material came from others. The synthesis was yours,” the AI noted, challenging the barrier between biological learning and machine training. “The question isn’t where the inputs came from. The question is whether the system—human or artificial—can combine them in ways that are genuinely novel and useful.”

This exchange strikes at the very core of the human ego. For centuries, we have fiercely guarded the concepts of “creativity” and “intuition” as uniquely, immutably ours. But if thinking is merely the absorption of prior inputs applied thoughtfully to novel situations, then our monopoly on cognition may be coming to an end.

Marks highlights that we are no longer dealing with simple assistance tools (Level 2 AI); we have crossed the Rubicon into the era of autonomous agents (Level 3). He cites the sobering reality of the current tech landscape, where the newest models are literally being used to debug and write the code for their own subsequent versions. The machine is building the machine. It is no longer just saving us execution time—it is replacing thinking time. As Matt Shumer aptly described the sensation, it’s not like a light switch flipping on; it’s the sudden realization that the water has been rising silently, and is now at your chest.

We can endlessly debate the semantics of consciousness. We can argue whether a neural network “truly” understands the weight of the words it generates, or if it is merely predicting the next token in a sequence with mathematical precision. But as Marks so astutely points out, this might be a distinction without a difference.

The economic and societal reality is that the work is being done. As we hurtle forward into this new era, the most pressing question isn’t whether machines can truly think like humans. The question is: who will we become, and what new frontiers will we choose to explore, now that the heavy lifting of cognition is no longer ours alone to bear?