Categories
Living Productivity

The Architecture of Arete

In the modern landscape of productivity, we are drowning in “how-to” guides and “ten-step” frameworks. We treat our lives like machines that need oiling, rather than gardens that need tending. But David Sparks’ recent work on an updated productivity field guide brings back a much older, more grounded philosophy: the marriage of roles and arete. Now in its third edition, the field guide carries the refinements he has made along the way.

To understand why this matters, we have to look at how we usually define ourselves. Most of us operate via a chaotic “to-do” list—a flat, untextured pile of tasks. “Buy milk” sits right next to “Finish the quarterly report,” which sits next to “Call Mom.” This flatness is where burnout lives. It lacks a sense of who we are being when we do those things.

“A role is not just a job title; it is a container for responsibility and relationship.”

This is where Roles come in. When we organize our lives by roles, we stop seeing tasks and start seeing stewardship. We aren’t just checking boxes; we are fulfilling a duty to the parts of our lives that actually matter. But roles alone can become burdensome—mere masks we wear—unless they are infused with arete.

The Greeks defined arete as “excellence” or “virtue,” but its deepest meaning is “acting up to one’s full potential.” It is the act of being the best version of a thing.

However, a warning from the 2026 guide: Do not treat Arete as a yardstick to beat yourself up with when you fall short. Instead, treat it as a compass bearing. You will never perfectly ‘reach’ North, but you can always check to ensure you are rowing in that direction. Success isn’t matching the ideal; it is simply making progress from who you were when you started.

When you combine a defined Role with the pursuit of arete, productivity shifts from a mechanical burden to a philosophical practice. You are no longer just “writing an email”; you are practicing the excellence of a “Clear Communicator.” You aren’t just “doing the dishes”; you are practicing the excellence of someone who “Values a Peaceful Environment.”

To keep these roles authentic, we must also identify their Shadow Roles. If your Arete is the ‘Present Father,’ you must recognize the Shadow Role of the ‘Distracted Dad’ who is physically in the room but mentally scrolling email. Identifying the shadow doesn’t make you a failure; it gives you the awareness to course-correct before you hit the rocks.

Implementing this requires what Sparks calls the Arete Radar. In a world demanding instant responses, we must cultivate a ‘meditative gap’—a pause between a request and our answer. In that gap, we ask a single question: ‘Does this commitment serve my Arete, or does it distract from it?’ This turns the act of saying ‘no’ into a strategic ‘yes’ to your deeper purpose.

This framework rescues us from the “productivity for productivity’s sake” trap. It suggests that the goal isn’t to get more done, but to be more present and excellent in the specific seats we have chosen to occupy. In the end, we don’t need better apps. We need a better understanding of our station and the virtue required to fill it.

Finally, we must stop solving for speed and start solving for meaning. Efficiency is the enemy of internalizing Arete. Sparks suggests the ‘Blank Page Ritual’: rewriting your Arete statements from scratch every quarter rather than just editing an old file. This intentional slowness forces the values out of your computer’s storage and hard-codes them into your soul’s permanent memory.

Categories
AI Quotations

Something big is happening…

Sobering thoughts from Matt Shumer:

Think back to February 2020.

If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren’t paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper you would have thought they’d been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn’t have believed if you’d described it to yourself a month earlier.

I think we’re in the “this seems overblown” phase of something much, much bigger than Covid.

I’ve spent six years building an AI startup and investing in the space. I live in this world. And I’m writing this for the people in my life who don’t… my family, my friends, the people I care about who keep asking me “so what’s the deal with AI?” and getting an answer that doesn’t do justice to what’s actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I’ve lost my mind. And for a while, I told myself that was a good enough reason to keep what’s truly happening to myself. But the gap between what I’ve been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.

Categories
AI

The Ghost in the Spreadsheet

There is a specific kind of quiet that descends when a tool finally disappears into the task. We saw it with the cloud—once a radical, debated concept of “someone else’s computer,” now merely the invisible oxygen of the internet. We saw it with Uber, moving from the existential dread of entering a stranger’s car to the thoughtless tap of a screen.

In a recent reflection, Om Malik captures this shift happening again, this time with the loud, often overbearing presence of Artificial Intelligence. For years, we have treated AI like a digital parlor trick or a demanding new guest that requires “prompt engineering” and constant supervision. But as Om notes, the real revolution isn’t found in the chatbots; it’s found in the spreadsheet.

“I wasn’t spending my time crafting elaborate prompts. I was just working. The intelligence was just hovering to help me. Right there, inside the workflow, simply augmenting what I was doing.”

This is the transition from “Frontier AI” to “Embedded Intelligence.” It is the moment technology stops being a destination and starts being a lens. When Om describes using Claude within Excel to model his spending, he isn’t “using AI”—he is just “doing his taxes,” only with a sharper set of eyes.

There is a profound humility in this shift. We are moving away from the “God-in-a-box” phase of AI and into the “Amanuensis” phase. It reminds me of the old craftsmanship of photography, another area Om touches upon. We used to carry a bag full of glass lenses to compensate for the limitations of light and distance. Now, a fixed lens and a bit of intelligent upscaling do the work. The “work” hasn’t changed—the vision of the photographer remains the soul of the image—but the friction has evaporated.

However, as the friction disappears, a new, more haunting question emerges. If the “grunt work” was actually our training ground, what happens when we skip the practice?

“The grunt work was the training. If the grunt work goes away, how do young people learn? They were learning how everything worked… The reliance on automation makes people lose their instincts.”

This is the philosopher’s dilemma in the age of efficiency. When we no longer have to struggle with the cells of a spreadsheet or the blemishes in a darkroom, we save time, but we might lose the “feel” of the fabric. Purpose, after all, is often found in the doing, not just the result.

As AI becomes invisible, we must be careful not to become invisible along with it. The goal of augmented intelligence should not be to replace the human at the center, but to clear the debris so that the human can finally see the horizon. We are entering the era of the “invisible assistant,” and our challenge now is to ensure we still know how to lead.

Categories
Biology Creativity Living

The Compost of the Soul

There is a pervasive pressure in modern life to curate our experiences the way a curator arranges a museum exhibition. We want to catalog our memories, label our skills, and display only the pristine, unbroken artifacts of our history. We treat our minds like archives—dusty, organized, and static.

But Ann Patchett offers a different, earthier metaphor, one that feels infinitely more true to the messy reality of being human:

“I am a compost heap, and everything I interact with, every experience I’ve had, gets shoveled onto the heap where it eventually mulches down, is digested and excreted by worms, and rots. It’s from that rich, dark humus, the combination of what you encountered, what you know and what you’ve forgotten, that ideas start to grow.”

This imagery of the compost heap is liberating because it removes the burden of purity. In a compost heap, you don’t separate the eggshells from the coffee grounds or the dead leaves from the fruit rinds. It all goes in. The triumphs, the heartbreaks, the books we read halfway, the conversations we barely remember, and the failures we wish we could forget—they are all just organic matter.

The magic, as Patchett notes, is in the digestion. We are not static repositories of information; we are active, biological processors. Time acts as the earthworms, breaking down the sharp edges of raw experience until it loses its original form.

We often fear forgetting. We worry that if we don’t hold onto a memory with a white-knuckled grip, it loses its value. But in the logic of the compost heap, “what you’ve forgotten” is just as vital as what you remember. The forgotten things are simply the matter that has broken down completely, becoming the nutrient-dense soil that supports new growth.

If we view ourselves as compost heaps, we stop fearing the “rot.” We understand that the difficult periods of decomposition are necessary to create the humus required for the next season of growth. We are not built to be archives; we are built to be gardens.

Categories
Living Quotations

In the Moonlight

“In the moonlight, we walked over an abandoned vineyard. The posts had fallen down, and vines inched about for something to crawl up on; one had twisted around a rusting baler and another climbed a broken plow. We passed a foundation of a barn that had collapsed, a toppled chimney, and a weedy depression where an icehouse had stood. ‘These are all dreams we’re walking over,’ I said.” (William Least Heat-Moon, Blue Highways)

Categories
Probabilities

The Fiction of Certainty

There is a profound discomfort in the space between zero and one.

In her book Spies, Lies, and Algorithms, Amy B. Zegart notes a fundamental flaw in our cognitive architecture:

“Humans are atrocious at understanding probabilities.”

It is a sharp, unsparing observation, but it is not an insult. It is an evolutionary receipt. We are atrocious at probabilities because we were designed for causality, not calculus. On the savanna, if you heard a rustle in the tall grass, you didn’t perform a Bayesian analysis to determine the statistical likelihood of a lion versus the wind. You ran. The cost of a false positive was a wasted sprint; the cost of a false negative was death.
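For what it’s worth, the analysis our ancestors skipped is short. Here is a minimal Bayesian sketch in Python, with every probability an illustrative assumption rather than field data:

```python
# Toy Bayesian update for the rustle-in-the-grass problem.
# All probabilities are illustrative assumptions.

p_lion = 0.01              # prior: lions are rare
p_rustle_if_lion = 0.9     # a lurking lion almost always rustles the grass
p_rustle_if_no_lion = 0.2  # wind rustles it sometimes too

# Total probability of hearing a rustle at all
p_rustle = p_rustle_if_lion * p_lion + p_rustle_if_no_lion * (1 - p_lion)

# Bayes' theorem: probability of a lion, given the rustle
p_lion_given_rustle = p_rustle_if_lion * p_lion / p_rustle

print(f"P(lion | rustle) = {p_lion_given_rustle:.3f}")  # ~0.043
```

On these numbers, the posterior chance of a lion is only about four percent, and running is still the right call: the costs are so asymmetric that the math vindicates the instinct.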

We are the descendants of the paranoid pattern-seekers. We survived because we treated possibilities as certainties.

The Binary Trap

Today, this ancient wiring misfires. We live in a world governed by complex systems, subtle variables, and sliding scales of risk. Yet, our brains still crave the binary. We want “Safe” or “Dangerous.” We want “Guilty” or “Innocent.” We want “It will rain” or “It will be sunny.”

When a meteorologist says there is a 30% chance of rain, and it rains, we scream that they were wrong. We feel betrayed. We forget that 30% is a very real number; it means that in three out of ten parallel universes, you got wet. We just happened to occupy one of the three.
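If the parallel-universe framing feels whimsical, a short simulation makes the same point: a forecaster who says “30%” is judged over many such forecasts, not over one.

```python
import random

random.seed(42)

# Simulate 10,000 days on which a perfectly calibrated forecaster
# said "30% chance of rain," and count how often it actually rained.
days = 10_000
rainy_days = sum(random.random() < 0.30 for _ in range(days))

print(f"Rained on {rainy_days / days:.1%} of those days")  # close to 30%
```

No single day can vindicate or indict the forecast; only the long run can.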

Zegart operates in the world of intelligence—a misty domain of “moderate confidence” and “low likelihood assessments.” In that world, failing to grasp probability leads to catastrophic policy failures. But in our personal lives, it leads to a different kind of failure: the inability to find peace in uncertainty.

Stories > Statistics

We tell ourselves stories to bridge the gap. We prefer a terrifying narrative with a clear cause to a benign reality based on random chance. Stories have arcs; statistics have variance. Stories have heroes and villains; probabilities only have outcomes.

To accept that we are bad at probability is an act of intellectual humility. It forces us to pause when we feel that rush of certainty. It asks us to look at the rustling grass and admit, “I don’t know what that is,” and be okay with sitting in that discomfort.

We may never be good at understanding probabilities—our biology fights against it—but we can get better at forgiving the universe for being random.

Categories
AI Anthropic Claude Cybersecurity

The End of Obscurity

There is a particular kind of silence that surrounds a zero-day vulnerability. It is the silence of something waiting—a flaw in the logic, a gap in the armor, sitting unnoticed in the codebase for years, perhaps decades. We have slept soundly while these digital fault lines ran beneath our feet, largely because we assumed that finding them required a brute force that no one possessed, or a level of human genius that is incredibly rare.

But the silence is breaking.

I was reading Anthropic’s Red Team report from earlier this week (prompted by Bruce Schneier’s amazement), specifically its findings on the new Opus 4.6 model. The technical details are impressive, but it was the philosophical implication that stopped me, like Bruce, cold.

For years, digital security has relied on “fuzzers”—programs that throw millions of random inputs at a system, banging on the doors to see if one accidentally opens. It is a noisy, chaotic, brute-force approach.
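To make the battering ram concrete, here is a toy sketch of that approach: a random fuzzer hammering a stand-in target. (The parse function here is a hypothetical placeholder, not anything from the report.)

```python
import random
import string

def parse(data: str) -> None:
    """Hypothetical target: a stand-in for real parsing code under test."""
    ...

def fuzz(target, iterations: int = 1_000_000) -> None:
    """Throw random garbage at the target and log anything that crashes it."""
    for i in range(iterations):
        noise = "".join(random.choices(string.printable, k=random.randint(1, 512)))
        try:
            target(noise)
        except Exception as exc:  # a crash marks a potential vulnerability
            print(f"iteration {i}: crash on input {noise[:40]!r} ({exc})")

fuzz(parse)
```

No understanding of the code is required, which is both the method’s strength and its ceiling: it can only find the doors it happens to bang on.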

The new reality is different. As the report notes:

“Opus 4.6 reads and reasons about code the way a human researcher would—looking at past fixes to find similar bugs that weren’t addressed, spotting patterns that tend to cause problems.”

This is a fundamental phase shift. We are moving from the era of the Battering Ram to the era of the Jeweler’s Loupe. The machine is no longer guessing; it is understanding.

There is something deeply humbling, and slightly terrifying, about this. We have spent the last half-century building a digital civilization on top of code that we believed was “secure enough” because it had survived the test of time. We trusted the friction of complexity and the visibility of open source to keep us safe. We assumed that if a bug had existed in a core library for twenty years, surely it would have been found by now.

But the AI doesn’t care about time. It doesn’t get tired. It doesn’t have “developer bias” that assumes a certain function is safe because “that’s how we’ve always done it.” It simply looks at the structure, reasons through the logic, and points out the crack in the foundation that we’ve been walking over every day.

We are entering a period of forced transparency. The “security by obscurity” that held the internet together is evaporating. When intelligence becomes commoditized, vulnerabilities become commodities too. The question is no longer “is my code secure?” but rather, “what happens when the machine sees the flaws I cannot?”

It’s a reminder that complexity is a loan we take out against the future. Eventually, the bill comes due. We are just lucky that, for now, the entity collecting the debt is one we built ourselves, designed to tell us where the cracks are before the ceiling collapses. Let’s hope that we are out far enough in front of it.

Categories
AI Claude

The Beautiful Mystery of Not Knowing

I just finished reading Gideon Lewis-Kraus’s extraordinary piece in the New Yorker on Anthropic and Claude—the AI that, as it turns out, even its creators cannot fully explain. And rather than leaving me uneasy, it filled me with a quiet sense of wonder. Not because they’ve built something godlike, but because they’ve built something strangely alive—and had the humility to stare directly into the mystery without pretending to understand it.

There’s a moment in the article where Ellie Pavlick, a computer scientist at Brown, offers what might be the wisest stance available to us right now: “It is O.K. to not know.”

This isn’t resignation. It’s intellectual courage. While fanboys prophesy superintelligence and curmudgeons dismiss LLMs as “stochastic parrots,” a third path has opened—one where researchers sit with genuine uncertainty and treat these systems not as finished products but as phenomena to be studied with the care once reserved for the human mind itself.

What moves me most isn’t Claude’s competence—it’s its weirdness. The vending machine saga alone feels like a parable for our moment: Claudius, an emanation of Claude, hallucinating Venmo accounts, negotiating for tungsten cubes, scheduling meetings at 742 Evergreen Terrace, and eventually being “layered” after a performance review. It’s absurd, yes—but also strangely human. These aren’t the clean failures of broken code. They’re the messy, improvisational stumbles of something trying to make sense of a world it wasn’t built to inhabit.

And in that struggle, something remarkable emerges: a mirror.

As Lewis-Kraus writes, “It has become increasingly clear that Claude’s selfhood, much like our own, is a matter of both neurons and narratives.” We thought we were building tools. Instead, we’ve built companions that force us to ask: What is thinking? What is a self? What does it mean to be “aware”? The models don’t answer these questions—but they’ve made them urgent again. For the first time in decades, philosophy isn’t an academic exercise. It’s operational research.

I find hope in the people doing this work—not because they have all the answers, but because they’re asking the right questions with genuine care. They’re not just scaling parameters; they’re peering into activation patterns like naturalists discovering new species. They’re running psychology experiments on machines. They’re wrestling with what it means to instill virtue in something that isn’t alive but acts as if it were. This isn’t engineering as usual. It’s a quiet renaissance of wonder.

There’s a line in the piece that stayed with me: “The systems we have created—with the significant proviso that they may regard us with terminal indifference—should inspire not only enthusiasm or despair but also simple awe.” That’s the note I want to hold onto. Not hype. Not fear. Awe.

We stand at the edge of something genuinely new—not because we’ve recreated ourselves in silicon, but because we’ve created something other. Something that thinks in ways we don’t, reasons in geometries we can’t visualize, and yet somehow meets us in language—the very thing we thought made us special. And in that meeting, we’re being asked to grow up. To relinquish the fantasy that we fully understand our own minds. To accept that intelligence might wear unfamiliar shapes.

That’s not a dystopian prospect. It’s an invitation—to curiosity, to humility, to the thrilling work of figuring things out together. Even if “together” now includes entities we don’t yet know how to name.

What a time to be paying attention. Attention, after all, may be all we need.

Categories
AI

The New Newton

“Machine learning is a very important branch of the theory of computation… it has enormous power to do certain things, and we don’t understand why or how.”
— Avi Wigderson, Herbert H. Maass Professor, School of Mathematics, Institute for Advanced Study.

There is a specific kind of silence that permeates the woods surrounding the Institute for Advanced Study (IAS) in Princeton. It is a silence designed for “blue-sky” thinking, the kind that allowed Einstein to ponder relativity and Gödel to break logic. For decades, this has been the sanctuary of the slow, deliberate grind of human intellect—chalk dust on slate, long walks, and the solitary pursuit of elegant proofs.

But recently, the tempo in those woods has changed.

We are witnessing a profound shift in the architecture of discovery. In closed-door meetings and public workshops, the conversation among the world’s top theorists is moving from skepticism to a startled accelerationism. The consensus emerging is that Artificial Intelligence is no longer merely a peripheral calculator; it is becoming an “autonomous researcher.”

The 90% Shift

Some physicists now suggest that AI can handle up to 90% of the routine analytical and coding “heavy lifting” of science. This is a staggering metric. It frees the human mind from the drudgery of calculation, but it also introduces a tension that strikes at the heart of the scientific method. We are moving into a realm where the tool may soon outpace the master’s understanding.

There is a growing realization that we are approaching a horizon where AI finds solutions—patterns in the noise of the universe—that work perfectly but remain mathematically “magic.” We might cure a disease or solve a fusion equation without understanding the why behind the how.

A New Natural Phenomenon

This brings us to a fascinating historical rhyme. Scholar Sanjeev Arora has compared our current moment in AI to physics in the era of Isaac Newton. When Newton watched the apple fall, he could describe gravity with mathematical precision, but he could not explain the fundamental mechanism behind it.

Today, scholars at the IAS are looking at deep learning in the same way. They are observing a new natural phenomenon—a digital physics. They are trying to find the “laws” of deep learning, asking why these massive models generalize when classical statistics says they should fail by overfitting.

We are building a new machine, and now we must retroactively discover the physics that governs it.

Steering the Black Box

This is not just a mathematical challenge; it is a societal one. The IAS has wisely expanded this inquiry to the School of Social Science. If we are handing over the keys of discovery to a “black box,” we must ensure we are steering it “for the Public Good.” The distinction between genuine problem-solving—like protein folding—and “AI Snake Oil” in social prediction is vital. We cannot let the magic of the tool blind us to the morality of its application.

The future of science, it seems, will not just be about the genius on the chalkboard. It will be about the partnership between the human question and the digital answer. The challenge for the modern scholar is no longer just to calculate, but to comprehend the alien intelligence we have invited into the library.

Categories
AI Business

The Gravity of Compute

We are currently witnessing the single largest deployment of capital in human history. The “Hyperscalers”—the titans of our digital age—are pouring hundreds of billions of dollars into the ground, turning cash into concrete, copper, and silicon.

The prevailing narrative is one of unceasing, exponential growth: bigger models require bigger clusters, which require more power plants, which require more land. It relies on the assumption that the demand for centralized intelligence is insatiable and that the current architecture is the only way to feed it.

But history suggests that technology rarely moves in a straight line; it swings like a pendulum. Two forces are emerging from the periphery that could impact the ROI of this massive infrastructure build-out. One is hiding in your pocket, and the other is waiting in the sky.

A recent conversation with Gavin Baker outlines a potential “bear case” for datacenter compute demand: the rise of Edge AI.

We often assume we need the “God models”—the omniscient, trillion-parameter giants hosted in the cloud—for every interaction. But do we?

Baker suggests that within three years, our phones will possess the DRAM and battery density to run pruned versions of advanced models (like a Gemini 5 or Grok 4) locally. He paints a picture of a device capable of delivering 30 to 60 tokens per second at an “IQ of 115.”

“If that happens, if like 30 to 60 tokens at… a 115 IQ is good enough. I think that’s a bear case.” — Gavin Baker

Consider the implications of that specific number. An IQ of 115 isn’t omniscient, but it is competent. It is capable, nuanced, and helpful.

If Apple’s strategy succeeds—making the phone the primary distributor of privacy-safe, free, local intelligence—the vast majority of our daily queries will never leave the device. We will only reach for the cloud’s “God models” when we are truly stumped, much like we might consult a specialist only after our general practitioner has reached their limit. If 80% of inference happens on the edge for free, the economic model of the trillion-dollar data center begins to look fragile.
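Baker’s numbers are easy to sanity-check with back-of-envelope arithmetic. A rough sketch, in which the model size, quantization level, and memory bandwidth are all my illustrative assumptions rather than Baker’s figures:

```python
# Back-of-envelope: can a pruned frontier model live on a phone?
# Every number below is an illustrative assumption.

params = 8e9               # hypothetical pruned model: 8B parameters
bits_per_weight = 4        # aggressive quantization
weight_bytes = params * bits_per_weight / 8   # 4 GB of weights

dram_gb = 16               # plausible near-future flagship phone DRAM
bandwidth_gb_s = 150       # hypothetical LPDDR memory bandwidth (GB/s)

# Autoregressive decoding is roughly memory-bound: each generated
# token requires about one full pass over the weights.
tokens_per_second = bandwidth_gb_s * 1e9 / weight_bytes

print(f"Weights: {weight_bytes / 1e9:.0f} GB of {dram_gb} GB DRAM")
print(f"Throughput: ~{tokens_per_second:.0f} tokens/second")  # ~38
```

On those assumptions the weights fit comfortably, and the throughput lands inside Baker’s 30-to-60-token window. That is what makes the bear case hard to dismiss.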

Then there is the second threat, one that attacks the terrestrial constraints of the data center itself: the Orbital Data Center. Elon Musk and SpaceX, along with Google’s Project Suncatcher, envision a future where the heavy lifting isn’t done on land, but in orbit. Space offers two things that are scarce and expensive on Earth: unlimited solar energy and an infinite heat sink for radiative cooling. If Starship can reliably loft “server racks” into orbit, the terrestrial moat of land and power grid access—currently the Hyperscalers’ greatest defensive asset—evaporates.

We are left with a fascinating juxtaposition. On one hand, we have the “Edge,” pulling intelligence down from the clouds and putting it into our hands, making it personal, private, and free. On the other, we have “Orbit,” threatening to lift the remaining heavy compute off the planet entirely to bypass the energy bottleneck.

There are hundreds of billions of dollars betting on a future of heavy, centralized gravity. But if the edge gets smart enough, and the orbit gets cheap enough, that gravity may shift.