Categories
AI Work

The Dealers of Intelligence

There’s a scene early in John Kenneth Galbraith’s The Affluent Society where he describes Americans of an earlier era regarding industrial output with something close to reverence — the sheer productive capacity of the nation seemed almost miraculous, a force that could reshape civilization. Within a generation, of course, that same output had become background noise. Factories hummed, goods appeared, and nobody paused to marvel.

The miraculous had become mundane, and the mundane had become infrastructure.

I found myself thinking about that arc recently while listening to Sam Lessin on the More or Less podcast.

Lessin made an observation that I haven’t been able to shake: we probably aren’t heading toward a single, triumphant AGI monopoly — some god-machine that one fortunate company builds first and then rents to the rest of us in perpetuity.

Instead, Lessin suggested, we are barreling toward something far more ordinary, and in its ordinariness, far more interesting.

“There will be lots of ‘dealers of intelligence’. No one company will corner the market, no one big winner of AGI.”

Dealers of intelligence. I keep turning that phrase over. Where do we end up? No rapture, no singularity, no chosen company ascending to the throne of cognition. Just suppliers, distribution channels, price competition — the unglamorous mechanics of any maturing market.

And historically, thatโ€™s exactly how this tends to go.

Salt was once precious enough to pay soldiers with. Spices rewrote the map of the world. Steel, oil, and computing power each arrived wrapped in mystique and guarded behind scarcity before the inevitable happened: extraction improved, distribution scaled, and the miracle became a utility. Nobody thinks about the engineering marvel of the electrical grid when they flip a light switch. They just expect the light to come on.

If Lessin is right — and the competitive landscape of the last two years does little to argue against him — intelligence will follow the same curve. Not a single oracle, but a market. Cognitive utilities. Price-per-token negotiations. The same forces that commoditized bandwidth will commoditize reasoning, and we’ll argue about our AI subscription tiers the way we currently argue about our data plans.

Which forces the interesting question: when genius is cheap, what exactly becomes valuable?

The professional moats of the last century were largely built on the ability to process specialized information and output reliable answers.

The doctor, the lawyer, the financial analyst, the programmer — each occupied a protected position because access to their domain of reasoning was genuinely scarce.

If I can buy a substantial fraction of that reasoning from a commodity supplier for fractions of a cent, the premium on raw cognitive horsepower doesn’t just shrink. It collapses.

What’s left, I think, is the un-commoditizable. Empathy. Physical presence. Judgment under conditions of genuine uncertainty and consequence. And above all — taste.

Taste is the thing that has always resisted systematization, because taste isn’t rational in any clean sense. It’s the residue of lived experience, of specific childhoods and particular failures and the accumulated weight of caring about things over time.

An algorithm can produce a structurally flawless piece of music; it takes a human to decide whether it matters, and why, and to whom.

That act of curation — of choosing what deserves to exist and what doesn’t — is going to become more consequential, not less, as the supply of technically competent output explodes.

There’s something almost liberating about this, if you let yourself sit with it.

A world of commoditized intelligence is, paradoxically, a profoundly human one. It removes the burden of raw computation from the center of what we do and pushes us toward the edges — toward the questions only we can ask, the connections only we can feel, the decisions only we can be held accountable for.

The dealers of intelligence will handle the materials. We’ll still be the architects, deciding what gets built.


Questions to Consider

  1. If intelligence becomes a commodity like electricity or bandwidth, which industries or professions will be slowest to feel that pressure — and why?
  2. Lessin frames this as a market with many suppliers rather than a winner-take-all race. Does the competitive landscape today support that view, or does it still look like a sprint toward consolidation?
  3. What does “taste” actually mean when the person exercising it is doing so with AI-augmented perception and judgment? Is it still the same thing?
  4. Who gets to haggle with the dealers? If cognitive utilities are cheap in aggregate but not universally accessible, does commoditization risk deepening inequality rather than democratizing thought?
  5. If the value of answering questions falls and the value of asking them rises, what does education need to look like — and how far is it from what it looks like now?
Categories
AI

The Jagged Mind

There is a peculiar kind of genius that has always made us uneasy — the savant who can calculate the day of the week for any date in history but cannot tie his own shoes. We admire the capability. We are troubled by the gap.

Demis Hassabis, speaking at this week’s India AI Impact Summit in Delhi, gave that unease a name. He called today’s most powerful AI systems “jagged intelligences.”

It is a phrase worth sitting with.

A jagged intelligence can win a gold medal at the International Mathematical Olympiad — solving problems that would humble most PhD mathematicians — and then, in the very next breath, stumble on elementary arithmetic if the question is phrased in an unfamiliar way.

The peaks are extraordinary. The valleys are bewildering. And crucially, you never quite know which terrain you’re standing on.

Hassabis identified three specific gaps between where we are and what he called “a kind of general intelligence.”

The first is continual learning — today’s models are trained, then frozen. They are, in a sense, educated and then released into a world they can no longer learn from.

The second is long-term planning. Current systems can reason tactically, but they lack the capacity to hold a coherent thread of intention across months or years the way a human architect, scientist, or entrepreneur does.

The third — and perhaps the most philosophically interesting — is that jaggedness itself: the wild inconsistency that makes today’s AI feel more like a force of nature than a reliable mind.

“A true general intelligence system shouldn’t have that kind of jaggedness.”

What strikes me about Hassabis’s framing is how it reorients the conversation.

We have spent years debating whether AI is “intelligent.” His point is more subtle: intelligence without consistency is not yet wisdom. A system that is brilliant and brittle in equal measure is something genuinely new in the world — not human, not the robots of science fiction, but a third thing we don’t yet have good language for.

The road from jagged to coherent is, I suspect, the central engineering and philosophical challenge of the next decade.

Continual learning means systems that grow with us. Long-term planning means systems that can be trusted with consequential goals. Consistency means systems whose judgment we can actually rely on.

Until then, we are working with something that resembles a prodigy — dazzling, occasionally humbling, and not yet quite whole.

Questions to Consider

  1. The Consistency Problem: If you knew an AI system could solve a problem brilliantly 90% of the time but fail unpredictably the other 10%, how would that change the decisions you’d trust it to make?
  2. Frozen in Time: What does it mean that the systems we rely on most are, at their core, educated in the past and unable to learn from the present? What human analog does that bring to mind?
  3. Jagged vs. General: Hassabis draws a line between “jagged intelligence” and “general intelligence.” Do you think general intelligence is the right destination — or is there value in systems that are deeply specialized, even if inconsistent?
  4. The Savant Question: We’ve always had a complicated relationship with uneven genius in humans. Does the “jagged AI” problem feel categorically different to you, or just a new version of an old puzzle?
Categories
AI Work

Surviving Our Own Success: The Existential Shift of the AI Era

We are standing on the precipice of a profound shift—not just in how we work, but in what work actually means to us. Sam Harris talks about it here. It’s disturbing in many ways!

Lately, the cultural conversation has been thick with a specific kind of anxiety. The rising tide of concern around artificial intelligence and job displacement isn’t merely an economic panic; it is an existential one. For a long time, we comforted ourselves with the idea that the timeline for artificial general intelligence (AGI) was measured in decades. It was a problem for our children, or perhaps our grandchildren, to solve. But as recent discussions among tech leaders highlight, that timeline is compressing rapidly. We are now hearing serious projections that within the next 12 to 18 months, “professional-grade AGI” could automate the vast majority of white-collar, cognitive tasks.

“For centuries, human beings have defined themselves by the friction of their labor.”

We introduce ourselves with our job titles at dinner parties. We measure our worth by our productivity, our outputs, and the unique skills we’ve honed over decades. We willingly incur hundreds of thousands of dollars in student debt to secure a spot on the bottom rung of the corporate ladder, believing that with enough effort, we can climb it.

But suddenly, we are faced with the reality that the ladder isn’t just missing a few rungs; it is evaporating entirely.

Here lies one of the great ironies of our modern age: we always assumed the robots would come for the physical labor first. We pictured automated plumbers, robotic janitors, and android mechanics. Instead, they are coming for the thinkers. They are coming for the lawyers drafting contracts, the accountants crunching tax codes, the marketers writing copy, and the software engineers writing the very code that powers them. The high-status cognitive work we prized so deeply—the work we built our entire educational infrastructure around—turns out to be the easiest to replicate in silicon.

When a machine arrives that can mimic, accelerate, or entirely replace that friction, the foundation of our identity begins to tremble. We are moving from a world where we are the engines of creation to a world where we are merely the editors of it. A single person might soon do the work of a thousand, spinning up autonomous AI agents to execute entire business strategies, architect software, and manage logistics in a single afternoon.

Yet, as terrifying as this sounds, the most startling realization isn't a dystopian fear of rogue machines or cyber terrorism. It’s that this massive economic disruption is actually what success looks like. This isn't the failure mode of AI; this is the technology working exactly as intended, ushering in an era of unprecedented productivity and, theoretically, boundless abundance.

The emergency we face is that our social and economic systems are entirely unprepared for a reality where human labor is optional. We are witnessing what some have described as a “Fall of Saigon” moment in the tech and corporate worlds—a frantic scramble where a few founders and final hires are grasping at the helicopter skids of stratospheric wealth before the need for human employees vanishes. If we are truly approaching a future where human labor is obsolete, how do we share the wealth generated by these ubiquitous systems?

Perhaps there is a quiet grace hidden within this disruption. If AI takes over the mechanical, the repetitive, and the cognitive synthesis, it leaves us with the deeply, undeniably human. It forces us to lean into the things an algorithm cannot compute: empathy, lived experience, moral judgment, and the beautiful, messy reality of physical presence.

The future of work might not be about competing with machines at all. It forces us to confront the terrifying, beautiful question: Who are we when we don’t have to work? It is an invitation to finally separate our human worth from our economic output, and to redesign a society that shares the wealth of our own invention. We are entering an era of abundance. The only question is whether we have the collective imagination to survive our own success.

Questions to Ponder

  1. If your job title was erased tomorrow, how would you define your value to the world?
  2. How do we build a society that rewards human existence rather than just economic output?
  3. What is one deeply human skill or passion you would cultivate if you no longer had to work for a living?
Categories
AI Work

The Digital Beast of Burden

A friend of mine recently cut through the noise of the current AI discourse with a comment that felt surprisingly grounding. We were talking about the breathless predictions of AGI—superintelligence, sentient machines, the technological singularity—when he shrugged and said, “I don't need any of that. I just want AI to do the donkey work.”

He wasn't asking for a god in the machine; he was asking for a better tractor. He didn't want a synthetic philosopher to debate the meaning of life; he wanted the next evolution of “Claude Cowork”—a reliable, tireless entity to handle the drudgery so he could get back to the actual business of thinking.

There is something profound in that phrase: donkey work. It evokes the image of the beast of burden—the creature that carries the heavy packs up the mountain so the traveler can focus on the path and the view. For thousands of years, humans have sought tools to offload physical exertion. We domesticated animals, we built water wheels, we invented the steam engine. We outsourced the calorie-burning, back-breaking labor to preserve our bodies.

“The ‘donkey work’ of the information age isn’t hauling stone; it is the cognitive load of bureaucracy, formatting, sorting, scheduling, and synthesizing endless streams of data.”

Now, we are looking to preserve our minds.

The friction that exists between having an idea and executing it is often composed entirely of this “donkey work.” When my friend says he wants AI for this, he isn’t being lazy. He is expressing a desire to reclaim his cognitive bandwidth.

There is a fear that if we hand over these tasks, we become less capable. But I suspect the opposite is true. If you are no longer exhausted by the logistics of your work, you are free to be consumed by the meaning of it.

We often talk about AI as if it’s destined to replace the artist or the architect. But the most valuable version of this technology might just be the humble assistant—the digital mule that quietly processes the mundane in the background. It’s the difference between a tool that tries to be you, and a tool that helps you be you.

We don’t need AGI to solve the human condition. We just need the “donkey work” handled so we have the time and energy to experience it.

What do you think?

  1. Is there a danger that in handing over the “donkey work,” we accidentally hand over the friction required to build mastery?
  2. If your daily cognitive load dropped by 50% tomorrow, would you actually use that space for “higher thinking,” or would you just fill it with more noise?
  3. Where exactly is the line between “drudgery” and the “process”โ€”and are we risking erasing the latter to solve the former?
Categories
AI Business Work

The Curator of Intent

I have always found a certain comfort in the “clatter” of a digital workday. It’s that specific, rhythmic hum of a mind in motion—the clicking of a mechanical keyboard, the invisible friction of parsing a difficult paragraph or balancing a complex budget. For years, we’ve treated this white-collar grind as our intellectual sanctuary.

But Mustafa Suleyman, now steering Microsoft AI, recently laid out a timeline that suggests the sanctuary walls are evaporating.

From an article in the Financial Times:

“White-collar work, where you’re sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person — most of those tasks will be fully automated by an AI within the next 12 to 18 months,” Suleyman said.

This isn’t just about efficiency; itโ€™s about a fundamental shift in the “professional grade.” We are entering the era of the autonomous agentโ€”AI that doesn’t just wait for a prompt but “coordinates within workflows,” learns from its environment, and acts. Just ask any programmer that you know how AI is impacted their daily grind.

If Suleyman is correct, the “knowledge worker” is about to undergo a forced evolution. When the “doing” is handled by an agent that can learn and improve over time, what remains for the human? Will the models actually be able to learn from each of us in a personalized way, the way an intern learns from her mentor?

“Creating a new model is going to be like creating a podcast or writing a blog,” he said. “It is going to be possible to design an AI that suits your requirements for every institutional organisation and person on the planet.”

It seems our primary job description shifts from “Expert” to “Curator of Intent.” We aren't the ones finding the answers anymore; we are just the ones responsible for asking the right questions.

The next 18 months won’t just be a test of our technology, but a test of our egos. We have to learn to find our value not in the work we produce, but in the vision we hold and the questions we ask. We are shedding the “task” to save the “craft.” I just hope we remember the difference.


As we move toward this curated future, I’m left with a few questions I can’t quite shake. I’d love to hear your thoughts:

  1. The Wisdom Gap: Can you truly be a “Curator of Intent” without having ever been a “Doer of Tasks”? If we skip the apprenticeship of the mundane, where does our intuition come from?
  2. The Metric of Value: If output becomes “free,” how should we measure a human’s value in a professional setting?
  3. The Line in the Sand: Is there a part of your workflow you would refuse to automate, even if an AI could do it better?