Categories
AI, AI: Large Language Models, Programming

The Era of the Synthesizer: How AI Is Liberating the Coder

For decades, being a programmer meant being a translator.

You stood in the gap between what someone wanted and what a machine could understand. You learned the syntax. You memorized the libraries. You once spent three hours hunting a missing semicolon that turned out to be hiding in line 847 of a file you were sure you’d already checked.

The New York Times Magazine recently ran a piece by Clive Thompson on what AI coding assistants — models like Claude and ChatGPT — are doing to that job. The anxiety in the piece is real. When you sit down with a modern AI assistant and watch it generate in seconds what used to take you days, it’s genuinely disorienting. Hard-won expertise suddenly feels less like a moat and more like a speed bump.

That reaction is honest. I’d be suspicious of anyone who didn’t feel it.

But here’s what I keep coming back to: what we’re losing is the translation layer. The boilerplate. The muscle memory of syntax. What we’re not losing is the part that was always the actual job — figuring out what to build and why it matters.

The soul of software was never in the code itself. The code was always just a means to an end.

Think about what happens when the mechanical friction of a craft disappears. Photographers stopped having to mix their own chemicals in the dark and started spending that time making better images. Musicians stopped having to hand-copy scores and started composing more. The freed-up capacity doesn’t evaporate — it gets redirected upward, toward the work that actually required a human all along.

The same shift is underway in software. When the AI handles the loops and the boilerplate and the database queries, what’s left is everything that required judgment in the first place. The architecture. The user experience. The question of whether this thing should exist at all, and in what form, and for whom.

We’re moving from the how to the why. That’s not a demotion.

It does ask something of us, though. The old identity — programmer as master of arcane syntax — has to be relinquished. And letting go of a hard-earned identity is genuinely hard, even when what’s replacing it is better. That quiet grief the Times piece captures is worth sitting with, not dismissing.

But after you sit with it for a minute: we are entering the era of the synthesizer.

The synthesizer’s job is to hold the vision, curate the logic, and direct the output toward something that actually resonates with another human being. Empathy. Intuition. The ability to sense when something is almost right and know which direction to push it. These aren’t soft skills. They’re the whole game now.

The clatter of keyboards is fading. But the music we’re about to make — with AI doing the heavy lifting on the mechanics — has a lot more room to breathe.

Categories
AI, AI: Large Language Models, AI: Prompting

Using a Developmental Editor

I recently came across an X post by @heyrimsha where she shared several AI prompts that looked quite interesting.

One of them in particular was based on the notion of having a “developmental editor” from a major publication (like the Harvard Business Review etc.) reviewing content and suggesting ways to make the content “better”.

Obviously “better” is in the eye of the beholder but I wanted to give it a try. Here are my results using Gemini 3 Fast on my post from yesterday about John F. Burns.

The prompt:

I'm going to give you a draft of one of my blog posts below. You are to act as a world-class developmental editor for a major publication (like _The Atlantic_ or _Harvard Business Review_). 

Read my draft and identify:

1. **The 'So What?' Factor:** Where is the argument too generic or 'safe'?

2. **Borrowed Authority:** Where am I using quotes as a crutch instead of developing my own original ideas?

3. **The Narrative Thread:** Is my personal history a list of facts, or a cohesive story?

After the critique, provide a 'Masterwork Rewrite' that heightens the stakes, removes the fluff, and gives the piece a unique, proprietary framework (like 'The Seasonal Soul').

Here's the post: [POST]

Here’s Gemini’s response which includes a suggested rewrite at the end:

This is a poignant tribute to a journalistic titan. You’ve captured the “vibe” of John F. Burns perfectly—the Burberry raincoat, the smell of cordite, and the steady hand at the keyboard.

However, as an editor for The Atlantic or HBR, my job is to push you past “tribute” and into “thesis.” Currently, this reads like a very well-written appreciation. To make it a major publication piece, we need to turn your reflection on Burns into a diagnostic of our current cultural moment.

1. The ‘So What?’ Factor: The Risk of Nostalgia

The Critique: Your argument is currently: “John Burns was a great reporter from a better era, and I miss that.” This is a “safe” sentiment. Most readers over 40 agree that the “pre-internet era” had more gravitas.

The Fix: You need to answer why this matters now, beyond just missing a legendary writer. The “So What” isn’t that Burns is gone; it’s that the “Institutional Anchor” he provided—the ability to remain “straight” while the world tilts—is a lost technology we are now desperate to reinvent.

2. Borrowed Authority: The Quote as a Shield

The Critique: You use Burns’ quotes (the “gold standard,” “missionary complex,” “sweep of history”) to do the heavy lifting. You are describing his excellence using his own definitions.

The Fix: Don’t just quote his “gold standard.” Define what that standard looks like in the wild. Instead of saying he avoided the “missionary complex,” analyze why that complex is the dominant toxin in modern discourse. Use your own voice to categorize his genius.

3. The Narrative Thread: The “Passive Reader” Problem

The Critique: Your personal history here is a bit “list-like”: I read the obits, I remember his retirement, I read this specific obit today. It positions you as a passive consumer of his work.

The Fix: Link your personal evolution to his reporting. Did his dispatch from Sarajevo change how you viewed a specific conflict? Did his “considered writing” influence how you built your own “civilized center” (to use your phrase about his wife, Jane)? Turn the “list of memories” into a “narrative of influence.”


Masterwork Rewrite: The Architecture of the Straight Line

The Framework: “The Architecture of the Straight Line” This framework posits that in a world of ideological “curves” and “slants,” the hardest thing to build is a straight line of fact. It moves the focus from the man to the method.

The Draft:

We are currently living through a Great Slant. In our rush to be “right”—morally, politically, or algorithmically—we have abandoned the rarest craft in the human repertoire: the ability to keep the paper straight.

I spent my morning submerged in the obituary of John F. Burns, the longtime New York Times foreign correspondent. To many, Burns was the last of the “firemen,” a man who appeared in the world’s most fractured zip codes wearing a Burberry raincoat and a look of studied detachment. But as I traced the arc of his career—from the sniper alleys of Sarajevo to the blast-walled compounds of Baghdad—I realized I wasn’t just mourning a reporter. I was mourning a specific type of sanity.

Burns practiced what I call The Architecture of the Straight Line. In today’s media, we suffer from the “Missionary Complex”—a term Burns himself used to describe the urge to save the world rather than see it. Modern reporting is often a performance of empathy or an exercise in tribal signaling. We no longer want the “telling detail”; we want the detail that tells our side of the story.

Burns was different. He understood that the only way to survive the “chaos of war” was to build a “civilized center.” We see this literally in his partnership with his wife, Jane Scott-Long, who turned a derelict house in Baghdad into a fortress of state-of-the-art coffee and blast walls. But Burns did this intellectually, too. While the world was screaming in ideological shorthand, he was looking for the “sweep of centuries” in the reburial of a King or the notes of a lone cellist.

This is the “So What” of his passing: We have traded Considered Writing for Instant Reaction.

In the pre-internet era Burns inhabited, the time-zone differential acted as a natural filter—a mandatory pause that allowed the “blue pencil” of the editor to catch the slant before it hit the page. Today, that filter is gone. We are drowning in the “instant, unvetted roar.” We have plenty of missionaries, but very few architects.

Reading Burns taught me that keeping the paper straight isn’t a passive act of “objectivity.” It is a violent, daily struggle against one’s own biases. It is the refusal to let the heat of the moment melt the integrity of the record.

As the obituary section grows more familiar to me, I find myself looking for more than just a list of achievements. I am looking for a compass. John Burns kept his bearings not because he ignored the fire, but because he was the only one in the room who understood that you cannot describe the flames if you are busy trying to stoke them.

The “Straight Line” is a lost art. It’s time we started rebuilding it.

Categories
AI, AI: Large Language Models

The Echo Effect: Why Prompt Repetition is AI’s Best Kept Secret

In our relentless pursuit of complexity, we often overlook the elegant simplicity of a fundamental human habit: repeating ourselves.

We build colossal architectures, weave intricate neural networks, and throw mountains of computational power at our artificial intelligence systems, hoping to squeeze out a few more drops of reasoning and logic. Yet, sometimes the most profound breakthroughs require no new code, no additional latency, and no extra training data.

Sometimes, you just have to say it twice.

In a fascinating December 2025 paper titled “Prompt Repetition Improves Non-Reasoning LLMs,” researchers Yaniv Leviathan, Matan Kalman, and Yossi Matias uncovered an almost absurdly simple “free lunch” in AI optimization.

Their premise is straightforward: when you aren’t using a heavy reasoning model, simply copying and pasting your input prompt multiple times significantly boosts the model’s performance.

“When not using reasoning, repeating the input prompt improves performance for popular models (Gemini, GPT, Claude, and Deepseek) without increasing the number of generated tokens or latency.”

The mechanics behind this are elegantly pragmatic.

By repeating the prompt, you are moving the heavy computational lifting to the parallelizable “pre-fill” stage of the model’s processing. The AI’s causal attention mechanism gets to process the same tokens again, allowing the later iterations of the prompt to attend to the earlier ones. It effectively acts as a hack to simulate bidirectional attention in a decoder-only architecture.
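The trick is almost embarrassingly easy to apply. Here is a minimal sketch of the idea; the two-copy default and the blank-line separator are my own choices, since the paper simply feeds the model the same prompt more than once:

```python
def repeat_prompt(prompt: str, copies: int = 2) -> str:
    """Concatenate the prompt with itself so that, during the parallel
    pre-fill stage, later copies can attend to earlier ones."""
    return "\n\n".join([prompt] * copies)

question = "Which is larger: 9.11 or 9.9?"
doubled = repeat_prompt(question)
# Send `doubled` as the user message in place of `question`;
# the model generates the same number of output tokens as before.
print(doubled)
```

Because the repetition lives entirely in the input, it works with any chat API unchanged: you pay a slightly larger pre-fill, but no extra generated tokens.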

What’s even more telling is the paper’s observation on why this works so well.

The researchers noted that models trained with Reinforcement Learning (like OpenAI’s deep-thinking variants) naturally learn to “restate the problem” in their internal monologue. They figured out on their own what these researchers are suggesting we do manually: repeat the question to focus the mind.

Reading this paper, I couldn’t help but draw a parallel to the human condition and the nature of listening.

How often do we assume that because we have articulated a thought once, it has been fully absorbed? We fire off a single, dense instruction to a colleague, a partner, or a friend, and then marvel when the nuance is lost in translation.

We suffer from our own attention bottlenecks.

Like a non-reasoning LLM trying to parse a complex query in a single pass, we are constantly bombarded with a stream of tokens—emails, notifications, conversations, fleeting thoughts. To truly understand, to truly digest and synthesize information, we need the grace of repetition.

There is a strange poetry in the fact that to make our most advanced digital minds smarter, we have to talk to them the way we talk to a distracted child or a busy spouse. The “microscope effect” highlighted in the study—where repeating a prompt drastically improved extraction tasks—shows that the failure wasn’t in the model’s capacity to know, but in its capacity to focus. Repetition forces focus. It creates a resonant echo in the context window, a digital highlighter that screams, “This matters. Look here again.”

As we continue to navigate a world increasingly augmented by artificial intelligence, this paper serves as a humbling reminder. The bleeding edge of technology isn’t always found in the most complex equation; sometimes, it’s hidden in the most basic principles of communication.

Whether you’re prompting a billion-parameter language model or trying to connect with the human sitting across from you, the lesson is clear.

Clarity isn’t just about the words you choose. It’s about giving those words the space, the resonance, and the repetition they need to be truly understood.

Say it once to be heard; say it twice to be understood.

Categories
AI, AI: Large Language Models

The Architecture of Unpredictability

There is a special understanding that comes from looking too closely at a map of a massive network or a large city. There is a point where the individual components vanish, and something else—something “other”—takes over.

Niall Ferguson captures this beautifully in The Square and the Tower:

“Large networks are complex systems which have ‘emergent properties’ – the tendency of novel structures, patterns and properties to manifest themselves in ‘phase transitions’ that are far from predictable.”

We like to believe we are the architects of our systems. We build platforms, we codify laws, and we design cities with the intent of order.

But Ferguson points out that once a network crosses a certain threshold of complexity, it enters a state of “phase transition.” It’s like water reaching 100°C; it doesn’t just get “hotter”—it becomes steam. It changes its fundamental nature.

We see this most vividly today in the trajectory of Artificial Intelligence. An LLM is, at its core, a gargantuan network of weights and probabilities. We understand the math of the individual neuron, yet we cannot fully explain how, at a certain scale, these systems begin to exhibit reasoning, humor, or theory of mind. These are not explicitly programmed “features”; they are emergent properties—the ghost that moves into the machine once the network becomes sufficiently dense.

Dario Amodei, CEO of Anthropic, describes this phenomenon through the lens of scaling:

“The thing that is so surprising about these models is that as you scale them up, they just keep getting better at things you didn’t explicitly train them to do… there’s this sense in which the model is ‘learning’ the structure of the world just by being forced to predict the next word.”

This is the “emergent property.” It is the intelligence of the beehive that no single bee possesses. It is the sudden, viral revolution that no single activist could have ignited. These properties are far from predictable because they don’t live in the nodes of the network; they live in the relationships between them.

The philosophical weight of this is humbling. It suggests that our world is governed by a structural momentum that defies linear logic.

When we find ourselves in these moments of societal or personal transition, perhaps the goal isn’t to control the outcome, but to understand the new physics of the system we’ve helped create.

We aren’t just parts of the network; we are the medium through which the unpredictable manifests.


Questions to Ponder

  • If your own consciousness is an emergent property of your neural network, where does “you” actually reside?
  • In the social networks we inhabit daily, what properties are emerging that we haven’t yet named?
  • As AI continues its phase transition, are we creating a tool, or are we witnessing the birth of a new kind of physics?
Categories
AI, AI: Large Language Models

Kimi K2.5

I tried a few conversations with Kimi K2.5 last night and it is pretty impressive. It remains to be seen how the competition between models – especially open weight / open source models – will end up! Rumors are swirling that DeepSeek may release an updated model next month. We’ll see!

I asked Grok for the consensus “buzz” on the release of Kimi K2.5. Here’s what Grok reported this morning:

The consensus buzz around Kimi 2.5 (more precisely Kimi K2.5, from Moonshot AI) is extremely positive, especially in the open-source AI community, on platforms like Reddit (r/LocalLLaMA, r/singularity), X, Hugging Face, and tech news outlets. It was released very recently (around January 27, 2026), so the excitement is fresh and intense.

It’s widely hailed as one of the most powerful open-source models available right now—a native multimodal (text + vision, including strong video understanding and “visual coding”) Mixture-of-Experts model with ~1 trillion total parameters (32B active), 256K context, native INT4 quantization for efficiency, and standout agentic capabilities.

Key highlights driving the hype:

  • Benchmark dominance — It outperforms or beats models like GPT-5.2, Claude 4.5 Opus/Sonnet, Gemini variants, and other frontier models in several areas, including:
  • Humanity’s Last Exam (HLE): ~50-51.8% with tools (strong agentic reasoning)
  • SWE-Bench Verified (coding): Top scores, often leading in visual-to-code and software engineering tasks
  • Multimodal benchmarks (MMMU Pro, VideoMMMU, OCR-heavy tasks): State-of-the-art or near-SOTA without fine-tuning
  • Math/reasoning (AIME, GPQA-Diamond, etc.): Frequently matches or exceeds closed models
  • Standout features — “Agent Swarm” (beta) lets it orchestrate up to 100 sub-agents in parallel for complex tasks, handling 1,500+ tool calls and reportedly cutting execution time by up to 4.5x on long-horizon workflows. This is seen as a big step forward in agentic AI. Visual coding is a killer demo: upload a video/screen recording of a website/interface, and it clones it (including UX interactions) in one shot—people are calling this transformative for “vibe coding” and rapid prototyping.
  • Open-source advantage — Weights on Hugging Face under a Modified MIT license (permissive for most uses). It’s accessible via APIs (OpenAI-compatible) on platforms like Moonshot’s site, Together AI, NVIDIA NIM, and others—often at lower cost than Western closed models. Many see this as China continuing to close the gap (or pull ahead in open releases).

Community sentiment:

  • Enthusiastic praise for being “the strongest open-source model” in multimodal/agentic domains, with comments like “huge leap,” “crushing benchmarks,” and “changing the game” for coding, visual analysis, and tool-augmented work.
  • Some skepticism exists (as with any new model): questions about real-world vs. benchmark performance, potential optimization for evals, or occasional long-context quirks in earlier Kimi versions. A few users report mixed results in specific tools (e.g., GitHub Copilot integrations), but overall feedback leans heavily positive, especially for reasoning, creative tasks, and agentic flows.
  • It’s fueling discussions about open-source momentum, with parallels drawn to DeepSeek, Qwen, and others—positioning Moonshot as a top Chinese AI player.

In short, the vibe is “this is a massive win for open AI” and “watch out closed models”—with developers rushing to test it for coding agents, visual-to-code projects, and multi-agent setups. If you’re into frontier open models, it’s the hot topic right now.

Categories
AI, AI: Large Language Models, AI: Prompting

Liquid Software and the Death of the “User”

There is a profound disconnect in how we talk about Artificial Intelligence right now. In the boardrooms of legacy corporations, AI is a “strategy” to be committee-reviewed—a tentative toe-dip into efficiency. But on the ground, among the “AI natives,” something entirely different is happening. AI isn’t just making the old work faster; it is fundamentally changing the texture of what we build and how we think.

In a recent conversation, Reid Hoffman and Parth Patil explored this shift, and the metaphor that struck me most was the idea of software becoming “liquid.”

The Era of Liquid Software

For decades, we have treated software like furniture. We buy a CRM, a project management tool, or an analytics dashboard. It is rigid, finished, and distinct from us. We are the users; it is the tool. But Patil demonstrates a different reality: one where he drops a folder of raw CSV files into an agent like Claude Code and asks it to “look at the data and build me a dashboard.”

Sixty seconds later, he has a fully functional, interactive HTML dashboard. He didn’t buy it. He didn’t spend three weeks coding it. He simply willed it into existence for that specific moment.

This is “vibe coding.” It’s a term that sounds almost dismissive, but it represents a radical democratization of creation. You no longer need to know the syntax of Python to build a tool. You just need to know the “vibe”—the outcome you want, the logic of the problem, and the willingness to dance with an intelligent agent until it manifests.

The philosophical implication here is staggering. We are moving from a world of scarcity of capability to a world of abundance of cognition. When you can spin up a custom tool for a single week-long project and then discard it, the friction of problem-solving evaporates. The “app” is no longer a product you buy; it’s a transient artifact you summon.

Applying the “Vibe Code” Mindset

But how do we, especially those of us who don’t identify as “technical,” bridge the gap between watching this magic and wielding it? The conversation offers a roadmap. It starts by shedding the identity of the “user” and adopting the identity of the “orchestrator.”

If you want to move from passive observation to active application, here are three specific ways to start:

1. The “Interview Me” Protocol

We often stare at the blinking cursor, unsure how to prompt the AI. Hoffman suggests a reversal: Make the AI the interviewer. When you face a complex leadership challenge or a strategic knot, open your frontier model (Claude, GPT-4o, etc.) and say:

“Interview me about this problem until you have enough information to propose a framework or solution.”

This forces you to articulate your tacit knowledge, which the AI then structures into something actionable. It turns the monologue into a Socratic dialogue.

2. Build “Throwaway” Internal Tools

Stop looking for the perfect SaaS product for every niche problem in your team. If you have a messy recurring task—like organizing client feedback or synthesizing weekly reports—try “vibe coding” a solution. Use a tool like Replit or Cursor. Upload your messy data (anonymized if needed) and tell the agent:

“Write a script to organize this into a table based on sentiment.”

Don’t worry if the code is ugly. Don’t worry if you throw it away next month. The value is in the immediacy of the solution, not the longevity of the code.
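To make "throwaway" concrete, here is the sort of disposable script such a prompt might produce. Everything in it is a toy stand-in: the keyword lists and labels are invented, and a real one-off version might ask the model itself to do the tagging instead:

```python
import csv
import io

# Invented keyword lists standing in for a real sentiment signal.
POSITIVE = {"love", "great", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "confusing", "bug"}

def sentiment(text: str) -> str:
    # Crude keyword scoring: count positive vs. negative words.
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def to_table(feedback: list[str]) -> str:
    # Emit a CSV table: one row per comment, tagged with its sentiment.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["feedback", "sentiment"])
    for line in feedback:
        writer.writerow([line, sentiment(line)])
    return buf.getvalue()

print(to_table(["Love the great new dashboard", "Export is broken again"]))
```

The point is not the quality of the classifier; it is that a single-purpose table-builder like this costs a minute to summon and nothing to discard.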

3. Transform Meetings into Data

Meetings are usually where knowledge goes to die. They are ephemeral. But if you transcribe them (with permission), they become data. Don’t just ask for a summary. Feed the transcript to an agent and ask:

“Who should we have consulted on this decision that wasn’t in the room?”
“Create a decision matrix based on the arguments presented.”

This turns a passive event into an active, queryable asset.

Conclusion

The danger, as Hoffman notes, is the “secret cyborg”—the employee who uses AI to do their job in two hours and spends the rest of the week hiding. But the real win comes from the amplified team, where we share these “vibe coded” tools and prompts openly.

We are entering an age where your imagination is the only true constraint. If you can describe it, you can increasingly build it. The question is no longer “is there an app for that?” but “can I describe the solution well enough to bring it to life?”

Categories
AI, AI: Large Language Models, Investing

The Ledger of Curiosity

We often romanticize the “back of the napkin” idea. It is the symbol of spontaneous genius—the startup mapped out in a coffee shop, the ticker symbol hurriedly scribbled during a dinner party. But we rarely talk about what happens to the napkin afterwards.

Usually, it gets thrown away. Or lost. Or stuffed into a drawer, becoming just another artifact of a fleeting thought that had momentum but no direction.

In the first two parts of this experiment, I used Gemini 3 Pro to solve the friction of entry (transcribing my messy handwriting) and the friction of analysis (stress-testing the ideas against 10-K realities). But there was one final gap: Permanence.

An analysis that lives and dies in a chat window is barely better than one that lives and dies in a notebook. It is still ephemeral. To truly build a “Second Brain” for investing, the data needs to leave the conversation and enter a system.

“The goal of technology should be to stop us from losing the work we’ve already done.”

I tweaked my workflow one last time. I asked the AI to not just judge the stocks, but to format its judgment into a raw CSV block.

With a simple copy-paste, my handwritten scribble wasn’t just digitized; it was database-ready. It went from a piece of paper to a row in Google Sheets with columns for “Market Cap,” “P/E Ratio,” and “Primary Risk.”
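For anyone wanting to skip even the copy-paste step, a CSV block like that can be parsed directly with the standard library. The rows below are invented placeholders using the column names from my sheet:

```python
import csv
import io

# A CSV block as the model might return it (these rows are invented examples).
csv_block = """Ticker,Market Cap,P/E Ratio,Primary Risk
AAPL,Large,High,Hardware cycle dependence
XYZ,Small,N/A,Unsustainable payout ratio"""

# Parse the block into dict rows, ready to append to a ledger or spreadsheet.
rows = list(csv.DictReader(io.StringIO(csv_block)))
for row in rows:
    print(row["Ticker"], "-", row["Primary Risk"])
```

From here, appending each row to a dated ledger file over time is what turns a one-off analysis into a track record you can actually measure.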

Suddenly, I wasn’t just looking at a list; I was building a ledger. I can now track these ideas over months. I can see if the “Red Flag” the AI identified actually played out. I can measure my own batting average.

The goal of technology shouldn’t just be to make us faster at doing work. It should be to stop us from losing the work we’ve already done. By turning ink into data, we stop treating our ideas as disposable. We give them the respect of memory.

Categories
AI, AI: Large Language Models, Investing

The Digital Devil’s Advocate

There is a seduction in the handwritten note. When I scribble down a company name in a notebook, it is purely additive. It represents potential upside, a future win, a brilliant insight caught in ink. The notebook is a safe harbor for optimism because it lacks a “Reply” button. It doesn’t argue back.

But optimism is an expensive luxury in investing.

After my initial experiment—using Gemini 3 Pro to transcribe my messy list into tickers—I felt a surge of productivity. But productivity is not the same as discernment or understanding. I had a list of stocks, but I didn’t have a thesis. I just had digitized hope.

So, I took the next step. I didn’t ask the AI for validation; I asked for a fight. I fed the tickers back into the model with a specific directive: “Act as a contrarian hedge fund analyst. Find the red flags. Kill my enthusiasm.”

“I didn’t ask the AI for validation; I asked for a fight.”

The results were immediate and sobering. The “promising tech play” I had noted? The AI highlighted a massive deceleration in user growth hidden in the footnotes of their latest 10-Q. The “stable dividend payer”? It flagged a payout ratio that was mathematically unsustainable.

In seconds, the warm glow of my handwritten discovery was doused with the cold water of 10-K realities. And it was fantastic.

We often view AI as a tool for creation—generating text, images, and code. But its highest leverage application might actually be destruction. By using it to stress-test our assumptions, we outsource the emotional labor of being the “bad cop.” It allows us to kill bad ideas quickly, cheaply, and privately, before we pay the market tuition for them.

My notebook is still where the dreams live. But the digital realm is now where they go to survive the interrogation.

Categories
AI, AI: Large Language Models, Investing

From Ink to Insight

There is a distinct friction that exists between the analog world and the digital one. For years, analog notebooks have been the graveyard of good intentions—lists of books to read, article ideas to write, and companies to investigate, all trapped in the amber of my barely legible handwriting.

I recently found myself looking at one of these lists: a scrawl of company names I had jotted down while reading an article discussing possible companies for investment in 2026. Usually, this is where the work begins—taking my handwritten notes, typing them out one by one, searching for tickers, opening tabs, etc. It is low-value administrative work that often kills any spark of curiosity before it can turn into useful analysis.

“The barrier to entry for deep research drops to the time it takes to snap a photo.”

On a whim, I snapped a photo and uploaded it to Gemini 3 Pro. “Transcribe this,” I asked. “Give me the tickers.”

I expected errors. My handwriting is, to put it mildly, not easy to read (even for me!).

Instead, the AI didn’t just perform Optical Character Recognition (OCR); it performed contextual recognition. It understood that the scribble resembling “Apl” in a list of businesses was likely Apple, and returned $AAPL. It deciphered the intent behind the ink.

But the real shift happened when I asked Gemini to pivot immediately into research. Within seconds, I went from a static piece of paper to a dynamic analysis of P/E ratios, recent news, and market sentiment. The friction was gone.

This experience wasn’t just about productivity; it was about the fluidity of thought. We are moving toward a reality where the interface between the physical world and digital intelligence is becoming permeable. When the barrier to entry for deep research drops to the time it takes to snap a photo, our curiosity is no longer limited by our patience for data entry. We are free to simply think.

Categories
AI, AI: Large Language Models

The Texture of Autonomy

There is a distinct texture to working with a truly capable person. It is a feeling of relief, specific and profound.

When you hand a project to a junior employee who “gets it,” the mental load doesn’t just decrease; it vanishes. You don’t have to map the territory for them. You don’t have to pre-visualize every stumble or correct every navigational error. You simply point to the destination, and they find their way.

I was thinking about this feeling—this specific brand of professional trust—when I read a recent observation from two partners at Sequoia regarding the current state of Artificial Intelligence:

“Generally intelligent people can work autonomously for hours at a time, making and fixing their mistakes and figuring out what to do next without being told. Generally intelligent agents can do the same thing. This is new.”

The phrase that sticks with me is “without being told.”

For the last forty years, our relationship with computers has been strictly transactional. The computer waits. We command. It executes. Even the most sophisticated algorithms have essentially been waiting for us to hit “Enter.” They are tools, no different in spirit than a very fast abacus or a hyper-efficient typewriter.

But we are crossing a threshold where the software stops waiting.

The definition of intelligence in a workspace isn’t just raw processing power; it is the ability to recover from failure without supervision. It is the capacity to run into a wall, realize you have hit a wall, back up, and look for a door—all while the manager is asleep or working on something else.

When Sequoia notes that “this is new,” they aren’t talking about a feature update. They are talking about a shift in the ontology of our tools. We are moving from an era of leverage (tools that make us faster) to an era of agency (tools that act on our behalf).

This changes the psychological contract between human and machine. If an agent can “figure out what to do next,” we are no longer operators; we are managers. And as anyone who has transitioned from individual contributor to management knows, that is a fundamentally different skill set. It requires clearer intent, better goal-setting, and the ability to trust a process you cannot entirely see.

We are about to find out what it feels like to have a digital colleague that doesn’t just listen, but actually thinks about the next step.