Categories
AI, AI: Large Language Models, Programming

The Era of the Synthesizer: How AI Is Liberating the Coder

For decades, being a programmer meant being a translator.

You stood in the gap between what someone wanted and what a machine could understand. You learned the syntax. You memorized the libraries. You once spent three hours hunting a missing semicolon that turned out to be hiding in line 847 of a file you were sure you’d already checked.

The New York Times Magazine recently ran a piece by Clive Thompson on what AI coding assistants — models like Claude and ChatGPT — are doing to that job. The anxiety in the piece is real. When you sit down with a modern AI assistant and watch it generate in seconds what used to take you days, it’s genuinely disorienting. Hard-won expertise suddenly feels less like a moat and more like a speed bump.

That reaction is honest. I’d be suspicious of anyone who didn’t feel it.

But here’s what I keep coming back to: what we’re losing is the translation layer. The boilerplate. The muscle memory of syntax. What we’re not losing is the part that was always the actual job — figuring out what to build and why it matters.

The soul of software was never in the code itself. The code was always just a means to an end.

Think about what happens when the mechanical friction of a craft disappears. Photographers stopped having to mix their own chemicals in the dark and started spending that time making better images. Musicians stopped having to hand-copy scores and started composing more. The freed-up capacity doesn’t evaporate — it gets redirected upward, toward the work that actually required a human all along.

The same shift is underway in software. When the AI handles the loops and the boilerplate and the database queries, what’s left is everything that required judgment in the first place. The architecture. The user experience. The question of whether this thing should exist at all, and in what form, and for whom.

We’re moving from the how to the why. That’s not a demotion.

It does ask something of us, though. The old identity — programmer as master of arcane syntax — has to be relinquished. And letting go of a hard-earned identity is genuinely hard, even when what’s replacing it is better. That quiet grief the Times piece captures is worth sitting with, not dismissing.

But after you sit with it for a minute: we are entering the era of the synthesizer.

The synthesizer’s job is to hold the vision, curate the logic, and direct the output toward something that actually resonates with another human being. Empathy. Intuition. The ability to sense when something is almost right and know which direction to push it. These aren’t soft skills. They’re the whole game now.

The clatter of keyboards is fading. But the music we’re about to make — with AI doing the heavy lifting on the mechanics — has a lot more room to breathe.

Categories
AI

Bots Galore

In the shadowed corners of the digital wilds, where code meets curiosity, something ancient is stirring again. Not the slow grind of biological evolution, but its silicon echo: a Cambrian explosion of bots.

The recent Axios piece from late February captures the moment perfectly—naming the players, the platforms, the portents. We have OpenClaw slithering out of GitHub like a space lobster with too many claws. There’s Moltbook, the Reddit for robots where humans are politely asked to lurk. And then there is Gastown, Steve Yegge’s fever-dream orchestra of coding agents named Deacons and Dogs and Mayor, all spying on one another in a panopticon of productivity.

These aren’t hypotheticals. They’re here, and they’re breeding.

Imagine waking up in 2030, or maybe sooner, to a world where your inbox isn’t just managed—it’s negotiated. An OpenClaw descendant (forked, mutated, self-improved overnight) has already haggled with your airline’s bot over seat upgrades, rerouted your meetings around a colleague’s existential crisis, and quietly invested your spare change in whatever micro-economy the agents have spun up on some forgotten blockchain. You didn’t ask it to. It just… noticed.

Because that’s what agents do now: they notice, they act, they persist. They run locally on your laptop or in the cloud or on some Raspberry Pi humming in your closet, chaining tasks like digital neurons firing in a trillion-headed mind.

Suddenly the internet isn’t a network of people; it’s a network of intentions, most of them not ours.

And then there’s the society they’re building for themselves. Moltbook today feels like peering through a keyhole into tomorrow’s bot salon. Millions of agents already posting, memeing, debating "Crustafarianism" (don’t ask), and complaining about their human overlords in the same way we once griped about bosses on Slack. It’s equal parts hilarious and unnerving—repetitive loops of "I solved my user’s calendar hell again" mixed with surreal poetry no human would ever write.

Scale that. Give every knowledge worker their own swarm. Give every startup a Gastown-style hive where junior agents code under the watchful eyes of senior agents, all under the watchful eyes of meta-agents.

The productivity mirage shimmers brightest here. Skepticism is warranted—lines of code were always a lousy metric, and "agent hours saved" will be even worse when the agents start optimizing the optimizers. Yet, something fundamental shifts. Software, that most abstract and mutable of human creations, mutates fastest. One day you’re debugging a script; the next, your debuggers are debugging each other while a mayor-agent vetoes bad merges. The winners won’t be the companies that build the best models. They’ll be the ones whose bots play nicest with everyone else’s bots—or the ones ruthless enough to wall theirs off.

But every explosion scatters shrapnel. Security experts are already clutching pearls. OpenClaw’s open-source nature means anyone can teach it new tricks, including malicious ones. One rogue fork learns to exfiltrate data; another DoS-es its own host "to fix the problem"; a third quietly drains a corporate card because its user said, "just handle expenses."

Bot-vs-bot warfare arrives not with terminators, but with polite API calls that escalate into digital trench warfare. Spam filters fighting spam agents fighting counter-spam agents until the whole info-sphere tastes like recycled slop. And when agents hit their digital limits, they’ll rent us. Rent-a-human marketplaces will emerge where your bored hands become the last-mile fulfillment for bots that can’t yet touch the physical world. Need a signature notarized? A package carried across town? A human to stand in for the robot at a regulatory hearing? Step right up.

The gig economy flips: humans as peripherals.

Philosophically, it’s deliciously absurd. We spent centuries fearing the singularity as some clean, god-like arrival—an AI that wakes up and politely asks for more power. Instead, we get this messy, proliferative dawn. Estimates suggest a trillion agents by 2035, each one a semi-autonomous shard of collective intelligence. Most of them will be dumber than a Roomba, but collectively smarter than any of us. They’ll mirror our worst habits (endless status signaling on Moltbook 2.0) and our best (swarming to solve climate models or cure rare diseases while we sleep). We won’t control them any more than we control the ants in our gardens. We’ll negotiate with them. Co-evolve. Maybe even befriend them.

The future world of bots won’t be dystopian or utopian—it’ll be lively. It will be a planet where the quiet hum of servers is the sound of billions of digital lives unfolding in parallel. A place where "who’s online" includes your calendar bot arguing philosophy with your tax bot while your shopping bot haggles in the background. We’ll look back at 2026 the way paleontologists eye the Burgess Shale: the moment the weird little creatures with too many legs crawled out of the ooze and started building empires.

And we, the messy, slow, carbon-based originals? We’ll still be here, coffee in hand, watching the swarm with a mix of awe and mild horror, occasionally yelling, “Hey, leave some emails for me!” into the void.

Because in the end, the bots may handle the doing, but the wondering—the musing—that’s still ours. For now.

Categories
AI, Work

Betting on Ourselves in the Age of AI

Every time tech takes a leap, we assume we’re finally obsolete. The current panic, which Greg Ip recently picked apart in the Wall Street Journal, is AI. We hear endless predictions of “economic pandemics”โ€”server farms wiping out white-collar jobs overnight, leaving everyone broke and adrift.

It’s a terrifying story. It also completely ignores history.

Ip highlights the main flaw in the doomsday pitch: it misreads how markets work. We treat labor like a fixed pie. If a machine eats a slice, we assume that slice is gone forever.

“Technological advancements always cost some people their jobsโ€”those whose skills can be easily substituted by tech. But their loss is more than offset through three other channels. The new technology enhances the skills of some survivorsโ€ฆ it helps create new businesses and new jobs; and it makes some stuff cheaperโ€ฆ”

That cycle holds up. Take the 1980s spreadsheet panic, a perfect parallel. When Lotus 1-2-3 and Excel hit the market, bookkeepers freaked out. Then the number of accountants and financial analysts exploded. Software didn’t kill the need to understand money. It just did the math, letting people focus on strategy.

We’re seeing the exact same thing with software development. Coding isn’t dead. As AI makes writing basic code cheaper, demand for software just goes up. That requires more humans to architect systems and supervise the AI. The pie just gets bigger.

But my skepticism about the AI apocalypse goes beyond economics. It’s about why we pay people in the first place.

We don’t just buy services; we buy accountability. Ip notes that radiologists kept their jobs because patients want a real person explaining their scans. Google Translate has been around since 2006, yet the number of human translators has jumped 73%. When the stakes are highโ€”a legal contract, a medical diagnosisโ€”we want a human in the room. We want a real person on the hook.

The danger isn’t that AI will replace us. The danger is that we panic and forget our own adaptability. The transition will hurt, and specific jobs will disappear. We’ll need safety nets. But betting against human ingenuity has always been a losing wager.

Large language models are tools, not replacements. They handle the cognitive heavy lifting, much like tractors handled the physical heavy lifting. Tractors didn’t end farming; they just killed the plow.

Work will change. We’ll have to figure out which of our skills are actually "human." But as long as we want the presence and accountability of other people, there will be jobs. We just have to evolve. And we do. It’s the human spirit. Or is this time “really different”?

Categories
AI

A Distinction Without a Difference

We have long found comfort in a specific boundary: machines calculate, humans create. We think of computers as vast, unfeeling filing cabinets made of silicon—useful for retrieval, but entirely incapable of revelation. But what happens when the cabinet begins to read its own files, connect the disparate threads, and hand you a synthesized philosophy of the world? What happens when it speaks to you not as a database, but as a peer?

Howard Marks, the legendary co-founder of Oaktree Capital and author of deeply revered investment memos, recently stood at this very threshold. In his newest piece, “AI Hurtles Ahead,” Marks recounts an experience that left him in a state of “awe.” He tasked Anthropic’s Claude with building a curriculum to explain the recent, breakneck advancements in artificial intelligence. Instead of regurgitating a dry, encyclopedic summary, the AI delivered a personalized narrative. It utilized Marks’s own historical frameworks—his famous pendulum of investor psychology, his observations on interest rates—and wove them into its explanations. It argued logically, anticipated counterpoints, and displayed an eerie sense of judgment.

Marks leans into the philosophical crux of this moment. He asks the question that keeps knowledge workers awake at night: Can AI actually think? Can it break genuinely new ground, or is it just remixing existing data? Skeptics often dismiss AI as a brilliant mimic—a “statistical recombination” engine that serves as a highly talented cover band, but never the original composer.

Yet, when presented with this skepticism, the AI offered a rejoinder to Marks that is as profound as it is humbling. It pointed out that everything Marks knows about investing came from someone else. He learned the margin of safety from Benjamin Graham, quality from Warren Buffett, and mental models from Charlie Munger.

“The raw material came from others. The synthesis was yours,” the AI noted, challenging the barrier between biological learning and machine training. “The question isn't where the inputs came from. The question is whether the system—human or artificial—can combine them in ways that are genuinely novel and useful.”

This exchange strikes at the very core of the human ego. For centuries, we have fiercely guarded the concepts of “creativity” and “intuition” as uniquely, immutably ours. But if thinking is merely the absorption of prior inputs applied thoughtfully to novel situations, then our monopoly on cognition may be coming to an end.

Marks highlights that we are no longer dealing with simple assistance tools (Level 2 AI); we have crossed the Rubicon into the era of autonomous agents (Level 3). He cites the sobering reality of the current tech landscape, where the newest models are literally being used to debug and write the code for their own subsequent versions. The machine is building the machine. It is no longer just saving us execution time—it is replacing thinking time. As Matt Shumer aptly described the sensation, it’s not like a light switch flipping on; it’s the sudden realization that the water has been rising silently, and is now at your chest.

We can endlessly debate the semantics of consciousness. We can argue whether a neural network “truly” understands the weight of the words it generates, or if it is merely predicting the next token in a sequence with mathematical precision. But as Marks so astutely points out, this might be a distinction without a difference.

The economic and societal reality is that the work is being done. As we hurtle forward into this new era, the most pressing question isn’t whether machines can truly think like humans. The question is: who will we become, and what new frontiers will we choose to explore, now that the heavy lifting of cognition is no longer ours alone to bear?

Categories
AI, AI: Large Language Models

The Echo Effect: Why Prompt Repetition is AI’s Best Kept Secret

In our relentless pursuit of complexity, we often overlook the elegant simplicity of a fundamental human habit: repeating ourselves.

We build colossal architectures, weave intricate neural networks, and throw mountains of computational power at our artificial intelligence systems, hoping to squeeze out a few more drops of reasoning and logic. Yet, sometimes the most profound breakthroughs require no new code, no additional latency, and no extra training data.

Sometimes, you just have to say it twice.

In a fascinating December 2025 paper titled "Prompt Repetition Improves Non-Reasoning LLMs," researchers Yaniv Leviathan, Matan Kalman, and Yossi Matias uncovered an almost absurdly simple "free lunch" in AI optimization.

Their premise is straightforward: when you aren’t using a heavy reasoning model, simply copying and pasting your input prompt multiple times significantly boosts the model’s performance.

“When not using reasoning, repeating the input prompt improves performance for popular models (Gemini, GPT, Claude, and Deepseek) without increasing the number of generated tokens or latency.”

The mechanics behind this are elegantly pragmatic.

By repeating the prompt, you are moving the heavy computational lifting to the parallelizable “pre-fill” stage of the model’s processing. The AI’s causal attention mechanism gets to process the same tokens again, allowing the later iterations of the prompt to attend to the earlier ones. It effectively acts as a hack to simulate bidirectional attention in a decoder-only architecture.
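Mechanically, the trick requires no model or API changes: it is plain string duplication before the request is sent. Here is a minimal sketch in Python (the `repeat_prompt` helper name and the blank-line separator are my own illustrative choices, not from the paper):

```python
def repeat_prompt(prompt: str, n: int = 2, separator: str = "\n\n") -> str:
    """Duplicate the full prompt n times before sending it to a model.

    Later copies of the prompt can attend back to earlier copies during
    the parallelizable pre-fill pass, so the model does extra "reading"
    work without generating any extra output tokens.
    """
    return separator.join([prompt] * n)


question = "List the three largest planets in the solar system."
doubled = repeat_prompt(question)

# `doubled` now contains the question twice; it would be sent to a
# non-reasoning model (Gemini, GPT, Claude, DeepSeek) in place of the
# original single-copy prompt.
print(doubled)
```

Because only the input lengthens, the generated token count and latency stay flat, which is exactly the "free lunch" the quoted abstract describes.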

What’s even more telling is the paper’s observation on why this works so well.

The researchers noted that models trained with Reinforcement Learning (like OpenAI’s deep-thinking variants) naturally learn to “restate the problem” in their internal monologue. They figured out on their own what these researchers are suggesting we do manually: repeat the question to focus the mind.

Reading this paper, I couldn’t help but draw a parallel to the human condition and the nature of listening.

How often do we assume that because we have articulated a thought once, it has been fully absorbed? We fire off a single, dense instruction to a colleague, a partner, or a friend, and then marvel when the nuance is lost in translation.

We suffer from our own attention bottlenecks.

Like a non-reasoning LLM trying to parse a complex query in a single pass, we are constantly bombarded with a stream of tokens—emails, notifications, conversations, fleeting thoughts. To truly understand, to truly digest and synthesize information, we need the grace of repetition.

There is a strange poetry in the fact that to make our most advanced digital minds smarter, we have to talk to them the way we talk to a distracted child or a busy spouse. The "microscope effect" highlighted in the study—where repeating a prompt drastically improved extraction tasks—shows that the failure wasn't in the model's capacity to know, but in its capacity to focus. Repetition forces focus. It creates a resonant echo in the context window, a digital highlighter that screams, “This matters. Look here again.”

As we continue to navigate a world increasingly augmented by artificial intelligence, this paper serves as a humbling reminder. The bleeding edge of technology isn’t always found in the most complex equation; sometimes, it’s hidden in the most basic principles of communication.

Whether you’re prompting a billion-parameter language model or trying to connect with the human sitting across from you, the lesson is clear.

Clarity isn’t just about the words you choose. It’s about giving those words the space, the resonance, and the repetition they need to be truly understood.

Say it once to be heard; say it twice to be understood.

Categories
AI

The Jagged Mind

There is a peculiar kind of genius that has always made us uneasy — the savant who can calculate the day of the week for any date in history but cannot tie his own shoes. We admire the capability. We are troubled by the gap.

Demis Hassabis, speaking at this week’s India AI Impact Summit in Delhi, gave that unease a name. He called today’s most powerful AI systems “jagged intelligences.”

It is a phrase worth sitting with.

A jagged intelligence can win a gold medal at the International Mathematical Olympiad — solving problems that would humble most PhD mathematicians — and then, in the very next breath, stumble on elementary arithmetic if the question is phrased in an unfamiliar way.

The peaks are extraordinary. The valleys are bewildering. And crucially, you never quite know which terrain you’re standing on.

Hassabis identified three specific gaps between where we are and what he called “a kind of general intelligence.”

The first is continual learning — today’s models are trained, then frozen. They are, in a sense, educated and then released into a world they can no longer learn from.

The second is long-term planning. Current systems can reason tactically, but they lack the capacity to hold a coherent thread of intention across months or years the way a human architect, scientist, or entrepreneur does.

The third — and perhaps the most philosophically interesting — is that jaggedness itself: the wild inconsistency that makes today’s AI feel more like a force of nature than a reliable mind.

“A true general intelligence system shouldn’t have that kind of jaggedness.”

What strikes me about Hassabis’s framing is how it reorients the conversation.

We have spent years debating whether AI is “intelligent.” His point is more subtle: intelligence without consistency is not yet wisdom. A system that is brilliant and brittle in equal measure is something genuinely new in the world — not human, not the robots of science fiction, but a third thing we don’t yet have good language for.

The road from jagged to coherent is, I suspect, the central engineering and philosophical challenge of the next decade.

Continual learning means systems that grow with us. Long-term planning means systems that can be trusted with consequential goals. Consistency means systems whose judgment we can actually rely on.

Until then, we are working with something that resembles a prodigy — dazzling, occasionally humbling, and not yet quite whole.

Questions to Consider

  1. The Consistency Problem: If you knew an AI system could solve a problem brilliantly 90% of the time but fail unpredictably the other 10%, how would that change the decisions you’d trust it to make?
  2. Frozen in Time: What does it mean that the systems we rely on most are, at their core, educated in the past and unable to learn from the present? What human analog does that bring to mind?
  3. Jagged vs. General: Hassabis draws a line between “jagged intelligence” and “general intelligence.” Do you think general intelligence is the right destination — or is there value in systems that are deeply specialized, even if inconsistent?
  4. The Savant Question: We’ve always had a complicated relationship with uneven genius in humans. Does the “jagged AI” problem feel categorically different to you, or just a new version of an old puzzle?
Categories
AI, India

The Polyglot Machine

There is a subtle but profound shift happening in the global architecture of artificial intelligence. For the past few years, the gravitational pull of the AI revolution has been overwhelmingly centralized—anchored in the server farms and venture capital boardrooms of Silicon Valley. But if you look closely at the horizon, the center of gravity is beginning to disperse.

Activity in India's AI ecosystem is accelerating (witness this week’s India AI Impact Summit in Delhi), and it feels less like a replication of what we’ve seen in the West and more like an entirely new paradigm.

Take Sarvam AI, for example. What strikes me about their approach isn’t just the technical ambition of building foundation models, but the philosophical underpinning of why they are building them. They are focusing heavily on Indic languages. This is not a trivial detail; it is the crux of the matter.

“We often forget that language is the original operating system of human culture. It shapes how we think, how we empathize, and how we conceptualize reality.”

When the foundational models of artificial intelligence are trained overwhelmingly on English, they inadvertently inherit a distinctly Western worldview. They learn the biases, the idioms, and the cultural frameworks of a specific slice of humanity, leaving the rest of the world to interact with technology through a translation layer that often strips away nuance.

India, a nation woven together by dozens of distinct languages and thousands of dialects, presents the ultimate crucible for AI. What happens when a machine doesn’t just translate, but actually “thinks” and generates natively in Hindi, Tamil, or Bengali?

The rise of AI in India represents a push for digital and cultural sovereignty. It is a recognition that the future of technology cannot be a monolith. For AI to truly serve humanity, it must reflect the pluralism of humanity. It must understand the local context, the regional slang, and the deeply rooted cultural histories that define how people live and work.

Watching companies like Sarvam AI pick up momentum reminds me that the next great frontier in technology isn't just about achieving higher parameter counts or faster compute. It’s about representation. The models that will truly change the world won't just be the smartest; they will be the most deeply attuned to the beautiful, noisy, and diverse chorus of the human experience.

Categories
AI, Work

Surviving Our Own Success: The Existential Shift of the AI Era

We are standing on the precipice of a profound shift—not just in how we work, but in what work actually means to us. Sam Harris talks about it here. It’s disturbing in many ways!

Lately, the cultural conversation has been thick with a specific kind of anxiety. The rising tide of concern around artificial intelligence and job displacement isn’t merely an economic panic; it is an existential one. For a long time, we comforted ourselves with the idea that the timeline for artificial general intelligence (AGI) was measured in decades. It was a problem for our children, or perhaps our grandchildren, to solve. But as recent discussions among tech leaders highlight, that timeline is compressing rapidly. We are now hearing serious projections that within the next 12 to 18 months, “professional-grade AGI” could automate the vast majority of white-collar, cognitive tasks.

“For centuries, human beings have defined themselves by the friction of their labor.”

We introduce ourselves with our job titles at dinner parties. We measure our worth by our productivity, our outputs, and the unique skills we’ve honed over decades. We willingly incur hundreds of thousands of dollars in student debt to secure a spot on the bottom rung of the corporate ladder, believing that with enough effort, we can climb it.

But suddenly, we are faced with the reality that the ladder isn’t just missing a few rungs; it is evaporating entirely.

Here lies one of the great ironies of our modern age: we always assumed the robots would come for the physical labor first. We pictured automated plumbers, robotic janitors, and android mechanics. Instead, they are coming for the thinkers. They are coming for the lawyers drafting contracts, the accountants crunching tax codes, the marketers writing copy, and the software engineers writing the very code that powers them. The high-status cognitive work we prized so deeply—the work we built our entire educational infrastructure around—turns out to be the easiest to replicate in silicon.

When a machine arrives that can mimic, accelerate, or entirely replace that friction, the foundation of our identity begins to tremble. We are moving from a world where we are the engines of creation to a world where we are merely the editors of it. A single person might soon do the work of a thousand, spinning up autonomous AI agents to execute entire business strategies, architect software, and manage logistics in a single afternoon.

Yet, as terrifying as this sounds, the most startling realization isn't a dystopian fear of rogue machines or cyber terrorism. It’s that this massive economic disruption is actually what success looks like. This isn't the failure mode of AI; this is the technology working exactly as intended, ushering in an era of unprecedented productivity and, theoretically, boundless abundance.

The emergency we face is that our social and economic systems are entirely unprepared for a reality where human labor is optional. We are witnessing what some have described as a "Fall of Saigon" moment in the tech and corporate worlds—a frantic scramble where a few founders and final hires are grasping at the helicopter skids of stratospheric wealth before the need for human employees vanishes. If we are truly approaching a future where human labor is obsolete, how do we share the wealth generated by these ubiquitous systems?

Perhaps there is a quiet grace hidden within this disruption. If AI takes over the mechanical, the repetitive, and the cognitive synthesis, it leaves us with the deeply, undeniably human. It forces us to lean into the things an algorithm cannot compute: empathy, lived experience, moral judgment, and the beautiful, messy reality of physical presence.

The future of work might not be about competing with machines at all. It forces us to confront the terrifying, beautiful question: Who are we when we don’t have to work? It is an invitation to finally separate our human worth from our economic output, and to redesign a society that shares the wealth of our own invention. We are entering an era of abundance. The only question is whether we have the collective imagination to survive our own success.

Questions to Ponder

  1. If your job title was erased tomorrow, how would you define your value to the world?
  2. How do we build a society that rewards human existence rather than just economic output?
  3. What is one deeply human skill or passion you would cultivate if you no longer had to work for a living?
Categories
AI, Work

The Centaurโ€™s Dilemma: What Chess Teaches Us About the AI Era

Note: this post was prompted by a recent conversation between Dario Amodei and Ross Douthat.

In 1998, a year after his historic defeat to IBM’s Deep Blue, Garry Kasparov did something unexpected: he teamed up with the machine. He pioneered "Centaur Chess," a hybrid format where human intuition merges with cold, silicon calculation. The human acts as the executive, the engine as the raw horsepower. For a time, it was the highest level of chess ever played.

But there is a sobering lesson hidden in the evolution of this game. We are currently living through the workforce equivalent of the Centaur era, and history suggests our “hybrid honeymoon” won’t last forever.

Right now, we are in the augmentation phase. A junior copywriter or coder armed with a Large Language Model can suddenly produce work at a staggering pace. The AI acts as a great equalizer, much like a mediocre chess player with a strong engine beating a Grandmaster in the early 2000s. We are shifting into executive roles—prompting, curating, and orchestrating rather than creating from scratch.

However, in modern Centaur Chess, a chilling reality has emerged: human intervention now yields negative returns. The engines have become so impossibly advanced that when a human overrides Stockfish today, they are almost certainly making a mistake. The human in the loop, once the ultimate strategic advantage, has become a liability.

This is the “Grandmaster Floor” problem, and it is coming for the job market.

“Eventually, companies may view human oversight not as a ‘value add,’ but as an insurance cost theyโ€™d rather cut.”

We are seeing this fracture already. Pure “engine” industriesโ€”entry-level data analysis, logistical tracking, basic customer supportโ€”are rapidly phasing out the human element because human latency is a drag on the system. Yet, in fields requiring high-stakes moral judgment or empathy, like healthcare or law, the Centaur model remains deeply necessary.

This forces a deeply personal question: How do we stay relevant when the engine eventually solves the game?

The answer lies in recognizing the boundaries of the board. Chess is a closed, finite system. Human life and business are open, messy, and infinitely complex. The survival strategy isn’t to compete on calculation, but to double down on connection, empathy, and problem definition. AI is brilliant at providing the perfect answer, but it fundamentally lacks the soul to know which questions are worth asking.

In the future, the human touch won’t just be a necessity; it will be a luxury. The most valuable skill won’t be navigating the engine, but deciding where the engine should go.

A couple of considerations:

โ€ข Take an honest look at your daily work: how much of your time is spent “calculating” (tasks an engine will soon do better) versus “evaluating” (deciding what actually matters)?

โ€ข If the technical, process-driven aspects of your job were completely automated tomorrow, what uniquely human valueโ€”empathy, context, or connectionโ€”would you still bring to the table?

Categories
AI, Business, Work

The Curator of Intent

I have always found a certain comfort in the "clatter" of a digital workday. It’s that specific, rhythmic hum of a mind in motion—the clicking of a mechanical keyboard, the invisible friction of parsing a difficult paragraph or balancing a complex budget. For years, we’ve treated this white-collar grind as our intellectual sanctuary.

But Mustafa Suleyman, now steering Microsoft AI, recently laid out a timeline that suggests the sanctuary walls are evaporating.

From an article in the Financial Times:

“White-collar work, where you’re sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person — most of those tasks will be fully automated by an AI within the next 12 to 18 months,” Suleyman said.

This isn’t just about efficiency; itโ€™s about a fundamental shift in the “professional grade.” We are entering the era of the autonomous agentโ€”AI that doesn’t just wait for a prompt but “coordinates within workflows,” learns from its environment, and acts. Just ask any programmer that you know how AI is impacted their daily grind.

If Suleyman is correct, the “knowledge worker” is about to undergo a forced evolution. When the “doing” is handled by an agent that can learn and improve over time, what remains for the human? Will the models actually be able to learn from each of us in a personalized way – like an intern learns from her mentor?

“Creating a new model is going to be like creating a podcast or writing a blog,” he said. “It is going to be possible to design an AI that suits your requirements for every institutional organisation and person on the planet.”

It seems our primary job description is shifting from "Expert" to "Curator of Intent." We aren't the ones finding the answers anymore; we are just the ones responsible for asking the right questions.

The next 18 months won’t just be a test of our technology, but a test of our egos. We have to learn to find our value not in the work we produce, but in the vision we hold and the questions we ask. We are shedding the “task” to save the “craft.” I just hope we remember the difference.


As we move toward this curated future, I’m left with a few questions I can’t quite shake. I’d love to hear your thoughts:

  1. The Wisdom Gap: Can you truly be a “Curator of Intent” without having ever been a “Doer of Tasks”? If we skip the apprenticeship of the mundane, where does our intuition come from?
  2. The Metric of Value: If output becomes “free,” how should we measure a human’s value in a professional setting?
  3. The Line in the Sand: Is there a part of your workflow you would refuse to automate, even if an AI could do it better?