Categories: AI, Work

The Digital Beast of Burden

A friend of mine recently cut through the noise of the current AI discourse with a comment that felt surprisingly grounding. We were talking about the breathless predictions of AGI—superintelligence, sentient machines, the technological singularity—when he shrugged and said, “I don’t need any of that. I just want AI to do the donkey work.”

He wasn’t asking for a god in the machine; he was asking for a better tractor. He didn’t want a synthetic philosopher to debate the meaning of life; he wanted the next evolution of “Claude Cowork”—a reliable, tireless entity to handle the drudgery so he could get back to the actual business of thinking.

There is something profound in that phrase: donkey work. It evokes the image of the beast of burden—the creature that carries the heavy packs up the mountain so the traveler can focus on the path and the view. For thousands of years, humans have sought tools to offload physical exertion. We domesticated animals, we built water wheels, we invented the steam engine. We outsourced the calorie-burning, back-breaking labor to preserve our bodies.

“The ‘donkey work’ of the information age isn’t hauling stone; it is the cognitive load of bureaucracy, formatting, sorting, scheduling, and synthesizing endless streams of data.”

Now, we are looking to preserve our minds.

The friction between having an idea and executing it is often composed entirely of this “donkey work.” When my friend says he wants AI for this, he isn’t being lazy. He is expressing a desire to reclaim his cognitive bandwidth.

There is a fear that if we hand over these tasks, we become less capable. But I suspect the opposite is true. If you are no longer exhausted by the logistics of your work, you are free to be consumed by the meaning of it.

We often talk about AI as if it’s destined to replace the artist or the architect. But the most valuable version of this technology might just be the humble assistant—the digital mule that quietly processes the mundane in the background. It’s the difference between a tool that tries to be you, and a tool that helps you be you.

We don’t need AGI to solve the human condition. We just need the “donkey work” handled so we have the time and energy to experience it.

What do you think?

  1. Is there a danger that in handing over the “donkey work,” we accidentally hand over the friction required to build mastery?
  2. If your daily cognitive load dropped by 50% tomorrow, would you actually use that space for “higher thinking,” or would you just fill it with more noise?
  3. Where exactly is the line between “drudgery” and the “process”—and are we risking erasing the latter to solve the former?
Categories: AI, Anthropic, Claude, Cybersecurity

The End of Obscurity

There is a particular kind of silence that surrounds a zero-day vulnerability. It is the silence of something waiting—a flaw in the logic, a gap in the armor, sitting unnoticed in the codebase for years, perhaps decades. We have slept soundly while these digital fault lines ran beneath our feet, largely because we assumed that finding them required either a brute force no one possessed or a level of human genius that is vanishingly rare.

But the silence is breaking.

I was reading Anthropic’s Red Team report from earlier this week (prompted by Bruce Schneier’s amazement at it), specifically their findings on the new Opus 4.6 model. The technical details are impressive, but it is the philosophical implication that stopped me cold, just as it did Bruce.

For years, digital security has relied on “fuzzers”—programs that throw millions of random inputs at a system, banging on the doors to see if one accidentally opens. It is a noisy, chaotic, brute-force approach.
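
To make the contrast concrete, here is a toy version of that brute-force approach: a minimal Python sketch, with a hypothetical parse_record function standing in for the system under attack.

```python
import random

def parse_record(data: bytes) -> bytes:
    """Hypothetical target: a length-prefixed parser with a latent flaw."""
    if not data:
        raise ValueError("empty input")
    length = data[0]               # first byte declares the payload length
    payload = data[1:1 + length]
    assert len(payload) == length  # latent bug: blindly trusts the declared length
    return payload

def fuzz(trials: int = 1_000_000) -> None:
    """Hurl random byte strings at the target and watch for a door to open."""
    for i in range(trials):
        candidate = random.randbytes(random.randint(1, 32))
        try:
            parse_record(candidate)
        except AssertionError:
            print(f"trial {i}: crash on input {candidate!r}")
            return

if __name__ == "__main__":
    fuzz()
```

Real fuzzers such as AFL add coverage feedback and smarter input mutation, but the underlying bet is the same: volume over understanding.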

The new reality is different. As the report notes:

“Opus 4.6 reads and reasons about code the way a human researcher would—looking at past fixes to find similar bugs that weren’t addressed, spotting patterns that tend to cause problems.”

This is a fundamental phase shift. We are moving from the era of the Battering Ram to the era of the Jeweler’s Loupe. The machine is no longer guessing; it is understanding.
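
The report describes the model “looking at past fixes to find similar bugs that weren’t addressed.” The mechanical ancestor of that workflow is variant analysis: once a patch removes one risky call, you search for the siblings it missed. Here is a crude sketch, with a hypothetical pattern and source tree.

```python
import re
from pathlib import Path

# Suppose a past fix replaced one unsafe strcpy call. This (hypothetical)
# pattern hunts for the sibling call sites the patch never reached.
RISKY_CALL = re.compile(r"\bstrcpy\s*\(")

def find_unfixed_variants(repo: Path) -> list[tuple[Path, int, str]]:
    """Flag every remaining occurrence of the pattern a past fix removed."""
    hits = []
    for source in repo.rglob("*.c"):
        lines = source.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, start=1):
            if RISKY_CALL.search(line):
                hits.append((source, lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for path, lineno, line in find_unfixed_variants(Path("src")):
        print(f"{path}:{lineno}: {line}")
```

The script matches strings; the model reasons about structure and intent, and can flag the variant that looks nothing like the original but fails the same way. That is the loupe.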

There is something deeply humbling, and slightly terrifying, about this. We have spent the last half-century building a digital civilization on top of code that we believed was “secure enough” because it had survived the test of time. We trusted the friction of complexity and the visibility of open source to keep us safe. We assumed that if a bug had existed in a core library for twenty years, surely it would have been found by now.

But the AI doesn’t care about time. It doesn’t get tired. It doesn’t have “developer bias” that assumes a certain function is safe because “that’s how we’ve always done it.” It simply looks at the structure, reasons through the logic, and points out the crack in the foundation that we’ve been walking over every day.

We are entering a period of forced transparency. The “security by obscurity” that held the internet together is evaporating. When intelligence becomes commoditized, vulnerabilities become commodities too. The question is no longer “is my code secure?” but rather, “what happens when the machine sees the flaws I cannot?”

It’s a reminder that complexity is a loan we take out against the future. Eventually, the bill comes due. We are just lucky that, for now, the entity collecting the debt is one we built ourselves, designed to tell us where the cracks are before the ceiling collapses. Let’s hope we stay far enough out in front of it.

Categories: AI, Living, Productivity

The Reality Gap

“I follow AI adoption pretty closely, and I have never seen such a yawning inside/outside gap. People in SF are putting multi-agent claudeswarms in charge of their lives… people elsewhere are still trying to get approval to use Copilot in Teams.” — Kevin Roose

There is a specific kind of vertigo that comes from scrolling through the “Inside” of the AI bubble while the rest of the world simply goes to work. It is the dizziness of watching a new species of behavior emerge—“wireheading” and “claudeswarms”—while the vast majority of the economy is still asking for permission to use a spellchecker.

The future isn’t just unevenly distributed; it is becoming mutually unintelligible.

Roose notes a “yawning inside/outside gap” that feels distinct from previous tech cycles. In one reality—geographically centered in San Francisco and digitally centered in specific Discords—people are operating with a level of agency that only sci-fi writers dared to imagine. They are deploying multi-agent swarms to manage their lives and consulting large language models for existential guidance.

In the other reality—the one inhabited by the vast majority of the global workforce—people are still waiting for an IT ticket to clear so they can use a basic productivity assistant.

It is tempting to look at this divide solely through the lens of technical access, but Roose hits on a deeper truth: “there seems to be a cultural takeoff happening in addition to the technical one.”

This is the friction of our current moment. It is not just that the tools are different; the permissions we give ourselves to use them are different. The “Inside” is operating with a mindset of radical experimentation and integration. The “Outside” is operating within legacy frameworks of risk mitigation and bureaucratic approval.

The danger of this gap isn’t just economic inequality, though that is a guaranteed downstream effect. The immediate danger is a loss of shared context. When the creators of technology live in a reality where “claudeswarms” run the day, they risk losing the ability to design for, or even empathize with, a world that is still fighting for permission to use the tools at all.

We are living in the same year, but we are no longer inhabiting the same time. The challenge for those of us on the “Inside” is to resist the intoxication of the bubble long enough to build bridges, rather than just building faster escape pods.

Meanwhile, in China (from the Financial Times)…

“I’ve witnessed first hand how China has grown from having zero AI talent 20 years ago to mass producing them,” he said. “Some of our most cutting-edge work is now done by fresh graduates. The real geniuses to change the world soon could well be among them.”

Categories: AI, AI: Large Language Models

The Texture of Autonomy

There is a distinct texture to working with a truly capable person. It is a feeling of relief, specific and profound.

When you hand a project to a junior employee who “gets it,” the mental load doesn’t just decrease; it vanishes. You don’t have to map the territory for them. You don’t have to pre-visualize every stumble or correct every navigational error. You simply point to the destination, and they find their way.

I was thinking about this feeling—this specific brand of professional trust—when I read a recent observation from two partners at Sequoia regarding the current state of Artificial Intelligence:

“Generally intelligent people can work autonomously for hours at a time, making and fixing their mistakes and figuring out what to do next without being told. Generally intelligent agents can do the same thing. This is new.”

The phrase that sticks with me is “without being told.”

For the last forty years, our relationship with computers has been strictly transactional. The computer waits. We command. It executes. Even the most sophisticated algorithms have essentially been waiting for us to hit “Enter.” They are tools, no different in spirit from a very fast abacus or a hyper-efficient typewriter.

But we are crossing a threshold where the software stops waiting.

The definition of intelligence in a workspace isn’t just raw processing power; it is the ability to recover from failure without supervision. It is the capacity to run into a wall, realize you have hit a wall, back up, and look for a door—all while the manager is asleep or working on something else.
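
In software terms, that loop is almost embarrassingly simple to write down. Here is a minimal sketch, with hypothetical attempt and next_approach functions standing in for calls to a real model and its tools.

```python
import random

def attempt(task: str, approach: str) -> bool:
    """Stand-in for trying an approach; a real agent would call a model or tool."""
    return random.random() < 0.3  # most approaches hit a wall

def next_approach(task: str, failures: list[str]) -> str:
    """Stand-in for reasoning about what to try next, given what has failed."""
    return f"approach-{len(failures) + 1}"

def work_autonomously(task: str, max_attempts: int = 10) -> bool:
    """Try, hit the wall, notice the wall, back up, look for a door; repeat
    without ever waking the manager."""
    failures: list[str] = []
    while len(failures) < max_attempts:
        approach = next_approach(task, failures)
        if attempt(task, approach):
            return True            # found the door
        failures.append(approach)  # remember the wall
    return False                   # only now does the human get pinged

if __name__ == "__main__":
    print(work_autonomously("reconcile the quarterly report"))
```

The hard part was never the loop; it is filling in attempt and next_approach with something that genuinely learns from the failure list. That is the part that is new.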

When Sequoia notes that “this is new,” they aren’t talking about a feature update. They are talking about a shift in the ontology of our tools. We are moving from an era of leverage (tools that make us faster) to an era of agency (tools that act on our behalf).

This changes the psychological contract between human and machine. If an agent can “figure out what to do next,” we are no longer operators; we are managers. And as anyone who has transitioned from individual contributor to management knows, that is a fundamentally different skill set. It requires clearer intent, better goal-setting, and the ability to trust a process you cannot entirely see.

We are about to find out what it feels like to have a digital colleague that doesn’t just listen, but actually thinks about the next step.