Categories
AI India

Intelligence as a Public Good: India’s “AI ka UPI” Revolution

There is a recurring rhythm to human progress: a breakthrough is born as a luxury, matures into a commodity, and ultimately solidifies into infrastructure.

We saw it with electricity, we saw it with the internet, and in 2016, we saw India do it with money through the Unified Payments Interface (UPI). UPI took the friction out of digital finance, transforming it from a walled garden guarded by private banks into a digital public good.

Now, it appears India is attempting to do for intelligence what it did for payments.

The global narrative around Artificial Intelligence is currently dominated at one end by massive private moats, and at the other by various open-source/open-weight efforts.

Silicon Valley primarily approaches AI as a capital-intensive arms race. Trillion-dollar tech players amass huge compute, train very large models, and rent out intelligence via by-the-drink APIs. This intelligence is a proprietary, monetized luxury.

Enter the “AI ka UPI” initiative and the IndiaAI Mission discussed by Ashwini Vaishnaw at this week’s India AI Impact Summit.

Instead of treating AI as a product to be sold, India is architecting it as a Digital Public Infrastructure (DPI). The government is doing the heavy lifting: subsidizing the compute, curating population-scale datasets, and building foundational models.

Currently, they are making over 38,000 GPUs available to startups and researchers at around ₹65 (less than a dollar) an hour, a fraction of the global cost. They are rolling out sovereign stacks like BharatGen and conversational models fluent in 22 regional languages.

“They are building an ‘orchestration layer’ for cognition.”

If a developer wants to build a voice-agent to help a rural farmer diagnose a crop disease, they don’t have to worry about the backend compute, the dataset acquisition, or paying a premium to a tech giant. They just plug into the public rails.

As I watch this unfold, I am struck by the philosophical shift it represents. We have become deeply conditioned to view AI through the lens of scarcity and subscription. But what happens when intelligence becomes a public utility?

It shifts the center of gravity of innovation. It becomes about who can solve the most acute, localized, human problems. The friction of creation drops to near zero. A bootstrapped team in a tier-two city can suddenly wield the same computational reasoning as a VC-funded Silicon Valley startup.

There is also an element of sovereignty here. In the 21st century, relying on foreign infrastructure for your population’s cognitive processing seems akin to relying on a foreign nation for your electricity. True technological independence requires sovereign AI: models trained on indigenous data, reflecting local culture, nuances, and values, rather than the implicit biases of others.

The implications could be staggering. We are moving from an era where AI is an elite tool to an era where it is the invisible, ubiquitous fabric of daily life for over a billion people.

The true measure of AI’s ultimate impact won’t be found in benchmark scores on a server farm. It will be found in the quiet dignity of a citizen accessing global markets through a vernacular voice assistant, or a rural clinic predicting patient outcomes with public compute.

I look forward to following India’s AI efforts as this and other initiatives take clearer shape.

Questions to consider

1. The Value of Human Capital: If artificial intelligence becomes as ubiquitous, reliable, and cheap as public electricity, what uniquely human skills will become the new premium in a hyper-automated society?

2. Cognitive Sovereignty: How will the geopolitical landscape shift when emerging economies no longer need to import their “cognitive infrastructure” and inherent cultural biases from Western tech players?

3. The Centralization of Truth: When a government builds and curates the foundational AI models for over a billion people, where is the line between providing a democratized public good and engineering a centralized cultural narrative?

What else?

Categories
AI India

The Polyglot Machine

There is a subtle but profound shift happening in the global architecture of artificial intelligence. For the past few years, the gravitational pull of the AI revolution has been overwhelmingly centralized, anchored in the server farms and venture capital boardrooms of Silicon Valley. But if you look closely at the horizon, the center of gravity is beginning to disperse.

Activity in India’s AI ecosystem is accelerating (witness this week’s India AI Impact Summit in Delhi), and it feels less like a replication of what we’ve seen in the West and more like an entirely new paradigm.

Take Sarvam AI, for example. What strikes me about their approach isn’t just the technical ambition of building foundation models, but the philosophical underpinning of why they are building them. They are focusing heavily on Indic languages. This is not a trivial detail; it is the crux of the matter.

“We often forget that language is the original operating system of human culture. It shapes how we think, how we empathize, and how we conceptualize reality.”

When the foundational models of artificial intelligence are trained overwhelmingly on English, they inadvertently inherit a distinctly Western worldview. They learn the biases, the idioms, and the cultural frameworks of a specific slice of humanity, leaving the rest of the world to interact with technology through a translation layer that often strips away nuance.

India, a nation woven together by dozens of distinct languages and thousands of dialects, presents the ultimate crucible for AI. What happens when a machine doesn’t just translate, but actually “thinks” and generates natively in Hindi, Tamil, or Bengali?

The rise of AI in India represents a push for digital and cultural sovereignty. It is a recognition that the future of technology cannot be a monolith. For AI to truly serve humanity, it must reflect the pluralism of humanity. It must understand the local context, the regional slang, and the deeply rooted cultural histories that define how people live and work.

Watching companies like Sarvam AI pick up momentum reminds me that the next great frontier in technology isn’t just about achieving higher parameter counts or faster compute. It’s about representation. The models that will truly change the world won’t just be the smartest; they will be the most deeply attuned to the beautiful, noisy, and diverse chorus of the human experience.

Categories
AI China

What’s new in AI from China?

February 2026 is a busy time for AI coming out of China, coinciding with the Chinese New Year.

I asked Grok this morning for an update and double-checked it with Gemini.

Here’s what Grok reported:

Chinese AI is in a very active phase right now (mid-February 2026), with a massive wave of new model releases, upgrades, and applications timed around the Lunar New Year (Spring Festival) holiday. This echoes the “DeepSeek shock” from exactly one year ago, when DeepSeek’s low-cost, high-performance models disrupted global expectations.

Major Recent Releases and Upgrades (mostly this month)

Chinese companies are shipping frontier-level models at a furious pace, often open-source or very low-cost, focusing on efficiency, reasoning, coding, multimodality, agents, and real-world applications.

  • Alibaba → Just launched Qwen 3.5 (with agentic features, multimodal inputs like text + photo + video, up to 2-hour video analysis). They also released RynnBrain (specialized for physical AI/robotics, helping robots understand and interact with the real world).
  • ByteDance (TikTok parent) → Released Doubao 2.0 (claims to match GPT-5.2 / Gemini 3 Pro level reasoning and multi-step tasks). Their video gen model Seedance 2.0 went viral for high-quality deepfakes and creative uses, sparking huge youth/creator interest.
  • Zhipu AI → Dropped GLM-5 (open weights, strong in coding, long tasks, agent capabilities; one of the highest open-weight intelligence scores).
  • MiniMax → M2.5 and M2.5 Lightning (near state-of-the-art at ~1/20th the cost of top Western models like Claude Opus; very strong on coding/agent benchmarks).
  • Kuaishou → Kling 3.0 (advanced AI video generation).
  • Moonshot AI → Kimi K2.5 (very close to top proprietary models at 1/7th the price; excellent reasoning).
  • DeepSeek → Widely expected to drop V4 soon (focused on coding, potentially 1M+ context, efficient MoE architecture). They’ve been the pace-setter since early 2025.

Many of these are open-weight or open-source, leading to Alibaba’s Qwen family overtaking Meta’s Llama in cumulative Hugging Face downloads.

Broader Themes Right Now

  • Low-cost + open ecosystems → China is winning on price/performance and rapid iteration. Models often cost 5–20× less than Western equivalents while matching or nearing performance in many areas.
  • Two different races → Analysts say the US focuses on perfecting frontier models (bigger, more compute-heavy), while China emphasizes adoption: deploying cheap, efficient AI at massive scale in manufacturing, robotics, consumer apps, and the real economy.
  • Physical & embodied AI → Huge push here. Robotics demos (backflips, kung fu with nunchucks synced across many bots on live TV) show fast progress in “physical intelligence.” Alibaba’s RynnBrain targets robotics directly.
  • Other notable advances → In-orbit AI computing constellation (“Three-Body”) successfully tested (running LLMs on satellites for low-latency edge processing). Military applications (e.g., bio-inspired drone swarms using hawk/wolf behaviors) are advancing rapidly.
  • Challenges & realism → Some top Chinese AI leaders (e.g., Alibaba’s Qwen lead) estimate <20% chance of fully overtaking US frontier models in 3–5 years due to compute gaps and chip sanctions. But open collaboration and talent pipelines are closing the gap fast in many practical areas.

Overall, February 2026 feels like China’s AI industry hitting a coordinated sprint: not just catching up, but pulling ahead in accessible, deployable, cost-effective AI that reaches everyday users, creators, and industries quickly. The US still leads in raw frontier capability and monetization scale, but the gap is shrinking noticeably in deployment and iteration speed.


Gemini’s review included the following:

The report’s observation about the “Two Different Races” is the most insightful part:

• US Strategy: Scaling laws for maximum “Frontier Intelligence.”

• China Strategy: Scaling for “Deployment Efficiency.” By making models like MiniMax M2.5 and Kimi K2.5 available at a fraction of the cost ($0.15–$0.60 per 1M tokens), China is prioritizing the “AI Agent” economy, where reliability and low cost per task matter more than raw benchmark scores.
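As a quick sanity check on what those numbers imply, here is a bit of arithmetic. The per-token prices come from Gemini’s note above; the 100M-tokens-per-day usage level is my own hypothetical illustration:

```python
# Rough daily cost at the quoted prices. Prices are from the report above;
# the 100M-tokens/day usage level is a hypothetical illustration.

LOW_USD_PER_M, HIGH_USD_PER_M = 0.15, 0.60   # USD per 1M tokens
DAILY_TOKENS_M = 100                         # 100M tokens in a day

print(f"${LOW_USD_PER_M * DAILY_TOKENS_M:.2f} "
      f"to ${HIGH_USD_PER_M * DAILY_TOKENS_M:.2f} per day")  # $15.00 to $60.00
```

At $15 to $60 a day for a hundred million tokens, “cost per task” stops being the binding constraint.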

Categories
AI AI: Prompting Books Writing

How to Write a Book (The Voice-First AI Workflow)

We’ve all seen the “How to write a book in 24 hours with AI” tutorials. Usually, they involve a single prompt and result in a 200-page book of absolute “slop”: generic, robotic, and devoid of any human soul.

I recently came across a workflow on X that flips the script. Instead of asking AI to write for you, it uses AI as an editor, librarian, and investigative journalist to extract the book that’s already in your head.

I’ve adjusted this process a bit (with Gemini’s help) and built a revised “Toolkit” that makes it even more rigorous. This is a system you can use to write a book that actually sounds like you.

The Philosophy: Knowledge Extraction > Generation

The biggest mistake people make with AI is using it to generate ideas. The best books come from extracted knowledge. You already have the expertise in your head; the hard part is the “manual labor” of organizing, structuring, and filling gaps. We often overlook material we should have included, or struggle to work out how it should be sequenced for the greatest reader impact.

This workflow uses voice dictation as the first step to capture your raw energy, then brings in AI to handle the structural heavy lifting.

The 7-Step “Enhanced” Workflow

1. The Walking Braindump

Go for a walk and record a voice memo of everything you want to say about your topic. Don’t worry about structure, grammar, or “the right way” to say it. Just talk. Movement activates parts of the brain that sitting at a desk shuts down; you’re just using the outdoors to open up your mind and bring fresh perspective.

2. The Transcription & “Essence” Extraction

Use a tool to convert your voice memo into text (something like the built-in tools in the iOS Voice Memos app on your iPhone or, if you prefer, a third-party app like MacWhisper or Wispr Flow). I find the built-in tools work quite well these days, so give them a try before spending the money on a third-party app.
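If you’d rather script this step yourself, here’s a minimal sketch using the open-source openai-whisper package. This is my illustration rather than part of the original workflow, and the filenames are placeholders:

```python
# A minimal sketch of scripting the transcription step with the open-source
# openai-whisper package (pip install openai-whisper; it also needs ffmpeg
# installed). The filenames are placeholders.

import whisper

model = whisper.load_model("base")   # "base" is fast; "medium" is more accurate
result = model.transcribe("walking_braindump.m4a")

# Save the raw text so you can paste it into the Essence Extractor prompt below.
with open("braindump_transcript.txt", "w", encoding="utf-8") as f:
    f.write(result["text"])
```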

Once you have the transcript, don’t ask the AI to “write a chapter.” Instead, ask it to help identify the Core Thesis, the Target Audience, and the Unique Vocabulary you used. This creates a “North Star” for your project.

3. The Voice Mirror

This step is the secret sauce. To prevent the AI from making you sound like a corporate brochure, provide it with 3-5 pages of your best previous writing. Ask it to analyze your rhythm, sentence length, and diction. Tell it: “Clean up my transcripts, but keep my fingerprint.”

4. Recursive Brainstorming

Work on the outline with the AI. Iterate. Ask it what’s missing. How does it compare to the best-selling books in your niche? Where are you being redundant? Where is your “original” angle?

5. Filling the Gaps via “Socratic Interview”

Once you have an outline, you’ll find “thin” areas. Instead of typing, flip things around and ask the AI to play the role of an investigative journalist. Have it interview you about the missing pieces. Dictate your answers. As with the first step, this pulls deep insights out of you that you might never have thought to type. Imagine you’re talking to a friend and reacting to their reactions.

6. The “Devil’s Advocate” Pass

Before you get too far, ask the AI to “Red Team” your outline. Ask: “What would a skeptic say is the biggest leap of faith in my argument?” This forces you to add evidence where you were previously relying on “trust me.”

7. The Manual Polish

Finally, sit down and type. As the original author of this workflow noted, typing activates a different, more “painful” part of the brain that is essential for final quality. Use the AI-organized markdown files as your foundation, but always do the final “human” pass yourself.

Your “Project DNA”

If you’re going to try this, the key is consistency. Keep a file called Project_DNA.md. Every time you start a new session with AI, paste this file in first. It tells the AI exactly who you are, who you’re writing for, and what your voice sounds like.
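For reference, here’s one hypothetical skeleton for that file. The headings are my own suggestion, so adapt them to whatever the prompts below actually surface:

```markdown
# Project_DNA.md -- hypothetical skeleton; adapt to your own project

## Core Thesis
One or two sentences stating the single argument the book makes.

## Target Audience
Who the reader is, what they already know, and what they want.

## Unique Vocabulary
The metaphors and phrases you coined in your braindumps.

## Voice Fingerprint
Notes from the Voice Mirror: typical sentence length, rhythm, diction.

## Working Outline
The current chapter list, updated as the Socratic interviews fill gaps.
```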

Here’s the full toolkit with prompts that I’ve just described:

The AI Book Writing Toolkit: The Prompts

To help you get started, here are the exact prompts I use for each phase of this workflow.

Phase 1: The Essence Extractor

Use this on the transcript you’ve created after your first major walking braindump.

“I am providing a transcript of a raw ‘braindump’ for a book I am writing. Do not attempt to write the book yet. Instead, perform a deep thematic analysis to extract the ‘Essence’ of this project. Identify: The Core Thesis, The Target Audience, Unique Vocabulary (metaphors/phrases I use), and the Tone Map.”

Phase 2: The Voice Mirroring Protocol

Use this before organizing transcripts to ensure you don’t sound like a robot.

“Act as my Editor and Voice Stylist. I am providing 3–5 pages of what I consider to be my best writing. Analyze this writing for sentence rhythm, diction, and structure. Whenever you help me organize my voice transcripts, you must apply these stylistic ‘fingerprints.’ Clean up the grammar, but keep my rhythm.”

Phase 3: The Socratic Interviewer

Use this when a chapter feels thin or you hit a wall.

“Act as an investigative journalist interviewing me. Identify 3 areas where my current draft is fuzzy or surface-level. Ask me ONE question at a time. After I answer, challenge me: ‘That makes sense, but what about [Counter-argument]?’ Summarize our findings afterward.”

Phase 4: The Devil’s Advocate

Use this to stress-test your draft.

“I want you to ‘Red Team’ this rough copy of my book. Imagine you are a critical reviewer. Point out the ‘leaps of faith’ I am asking the reader to make. Identify where my arguments are ‘low-hanging fruit’ and tell me specifically where I am being too similar to existing books in this niche. Help me make it better.”

Final Thought

Your goal isn’t to use AI to work less; it’s to have it help you work deeper. By offloading the “clerk work” of organization and editing to AI, you free up your brain to do the actual “author work” of thinking, connecting, and writing.

Are you working on a book? I’d love to hear how you’re using (or avoiding) AI in your process. What prompts are you using? How have you adjusted them to fit your needs?

Categories
Health medical

Looking Back: Reflections on “My Life in a Sling”

It is hard to believe it has been over 15 years since I first shared the story of my rotator cuff surgery in a post titled “My Life in a Sling!”

What started as a simple update about a “nasty fall in the dark” ended up becoming one of the most-read and most-commented-on pieces I’ve ever published. To date, that post has over 1,200 comments from people all over the world sharing their own recovery journeys. Looking back at that experience today, a few things still stand out.

The Original Recovery “Toolkit”

In 2009, I shared the specific things that got me through those first few weeks. Even years later, when friends ask me about shoulder surgery, these are the points I find myself repeating:

• The Ice Machine: It was the unsung hero of the first 72 hours. While it felt strange not to “feel” the cold through the bandages, it was the primary reason my pain stayed manageable.

• The “Elephant Trunk” Exercise: Those early passive therapy movements were crucial for preventing the joint from freezing up.

• The Sleep Puzzle: Navigating the challenge of being a side-sleeper forced to sleep in a recliner or on the “wrong” side remains one of the most vivid memories of that time.

The Power of the Community

The most surprising part of that post wasn’t the surgery itself; it was the 1,217 replies (and counting). It became a digital waiting room where people shared their fears about the nerve block wearing off, practical tips on how to shower with one hand, and encouragement for those who felt like their recovery was taking too long.

Reading through those comments over the years has been a masterclass in human resilience and the value of shared experience. It turned a personal health update into a community resource.

The Importance of Capturing the Journey

I originally added a note to that post about the value of personal journaling. Looking back now, I’m so glad I documented the specifics of the recovery. Memory fades, and it is easy to forget the small milestones, like the day you can finally drive again or the first night you get a full eight hours of sleep.

If you are facing a similar health challenge today, I still stand by my original advice: Write it down. Not just for your future self, but because your story might be exactly what someone else needs to hear while they are sitting in their own “sling phase.”

For those who found that original post years ago: how are your shoulders doing today? And for those just starting the journey: what are you most nervous about?

Categories
AI New York City San Francisco/California Work

The Paradox of the Pulse

The skyline has always been a silhouette of our collective ambition. For a century, the steel and glass towers of our major cities functioned as the secular cathedrals of the modern age. But as Andrew Yang observes in his reflection on the shifting urban landscape, the pews are emptying. The “doom loop” (a self-reinforcing cycle of vacant offices, declining tax revenue, and diminishing services) is a mathematical ghost haunting our city planners.

Yet, if you walk the streets of Manhattan today, the sidewalks are often busier than ever. In San Francisco, the “Cerebral Valley” AI boom is sparking a gold rush of intellect that rivals the original tech explosion. We are witnessing a strange paradox: the Death of the Office occurring simultaneously with a Rebirth of the Urban Pulse.

The crisis Yang describes is real, but it may be a crisis of form rather than function. We tolerated the friction of urban life for the sake of career “flow.” Now that the flow is digital, the city is being forced to justify its existence through something more primal: energy.

“We are looking at a fundamental restructuring of the American city. The office was the sun around which everything else revolved. Now, that sun is dimming.”

The AI boom isn’t happening over Zoom; it’s happening in “hacker houses” and shared spaces where the speed of a conversation over coffee outpaces a fiber-optic connection. This suggests that the “doom loop” might only apply to the traditional, sterile corporate cubicle. The city is shedding its skin. It is moving away from being a place where we must be, toward a place where we want to be.

Yang’s warning serves as a necessary guardrail. We cannot ignore the fiscal cliff of empty high-rises. However, the vibrancy of NYC and the reinvigoration of SF suggest that the city isn’t dying; it’s just no longer a captive audience. We are standing in the ruins of an old habit, watching a new, more intentional way of living together take root in the cracks.


Five Questions to Ponder

  • The Pull of Proximity: If we no longer have to be in the city for a paycheck, what is the specific “energy” that keeps you coming back to the sidewalk?
  • The AI Renaissance: Is the AI boom in SF proof that high-innovation industries require physical density, or is it just the last gasp of the old model?
  • Form vs. Function: If a skyscraper can no longer be an office, what is the most radical thing it could become to serve a “busy” city?
  • The Captive Audience: For decades, cities were built for people who had to be there. How does a city change when it has to “woo” its citizens every single day?
  • Digital Nomads vs. Urban Anchors: Are we moving toward a world of “temporary density,” where cities are vibrant hubs for projects but no longer long-term homes?
Categories
SpaceX

A Starlink Light Show?

Earlier this morning, Elon Musk commented that it’d be cool to have a “Starlink light show” for America’s 250th birthday celebration later this year.

Intrigued, I asked Grok how this might be possible. Here’s what Grok replied:

A Starlink light show, as casually suggested by Elon Musk in a recent reply on X (where a friend proposed doing one “one of these days” because it “would look cool”), would be a coordinated visual spectacle using the thousands of Starlink satellites already in low Earth orbit. Here’s how it could realistically work, based on current satellite tech, physics, and what’s been observed/discussed.

Current “Natural” Light Shows from Starlink

Starlink satellites already create impressive visual effects unintentionally:

  • Reflected sunlight: They don’t have onboard lights; what people see as bright moving dots or “trains” is sunlight glinting off their flat solar panels and metallic surfaces. Right after launch, when satellites are still clustered and low in orbit, they form a striking linear “train” of pearls across the sky (often mistaken for UFOs). Over time, as they spread out, individual ones can flare brightly when their panels catch the sun at just the right angle toward observers on the ground.
  • Flares and flashes: Astronauts like Don Pettit have captured time-exposure photos showing dozens of Starlink satellites flashing like shooting stars near orbital dawn/dusk, when sunlight hits them optimally while the ground below is dark.

These are passive reflections, not controlled emissions.

How a Deliberate, Controlled Light Show Could Be Engineered

To turn this into something intentional and artistic (e.g., patterns, shapes, messages, or synced effects for an event like the US 250th birthday in 2026), SpaceX would need to add active control. Several feasible methods exist or could be adapted:

  1. Orientation/Attitude Control for Directed Reflections
    Starlink satellites already have precise thrusters and reaction wheels to adjust orientation for communication and orbit maintenance. By tilting solar panels or the satellite body in sync, they could “aim” reflections toward specific ground areas at precise times.
  • Coordinated across many satellites, this could create moving patterns, shapes, or even simple images (like drawing with points of light).
  • Limitations: It only works when the sun is illuminating the satellites (twilight hours), and visibility depends on clear skies and viewer location. Not truly “on-demand” at midnight.
  2. Onboard LEDs or Controllable Lights
    Adding low-power LED arrays (visible from Earth) is technically possible, though current Starlink satellites don’t have them. Power draw would be minimal compared to their solar/battery setup, and they’d only need to activate briefly for a show.
  • This would allow night-time visibility without relying on sunlight.
  • Precedents exist in other satellites (e.g., some experimental ones with flashing beacons), but it’d require new hardware on future generations.
  3. Using Existing Inter-Satellite Lasers (the “Space Lasers”)
    Starlink satellites are equipped with optical inter-satellite links (lasers) for high-speed data transfer between satellites (up to 200 Gbps per link, with thousands active). These are infrared lasers, invisible to the human eye.
  • Not directly usable for a visible show: They’re not pointed at Earth for this purpose, and IR isn’t visible.
  • A fun speculative twist: Future upgrades could include visible-wavelength lasers or diffusers, but that’s not current capability. (Some discussions joke about “space lasers” for displays, but it’s mostly the reflection method that’s practical.)
  4. Long-Exposure or Camera-Based Effects
    One creative workaround mentioned in discussions: Have people point cameras/phone apps upward for a set time while satellites flash reflections in patterns. Combined with apps syncing exposure, it could form large-scale images (e.g., flags, logos) visible only in photos/videos, not to the naked eye. This scales with the constellation’s size (more than 7,000 satellites now, heading toward tens of thousands).

Practical Challenges and Realism

  • Scale needed: At any moment, only ~50–100 Starlink satellites might be visible overhead in one spot (due to orbital geometry). A continent-wide show would require precise timing across the fleet and ideal viewing conditions.
  • Light pollution concerns: Astronomers already criticize Starlink for unintended brightness; a deliberate show could amplify backlash unless mitigated (e.g., SpaceX has added anti-reflective coatings to newer versions).
  • Feasibility today vs. future: Musk’s comment was lighthearted (“one of these days”), likely nodding to future versions with more control. Current satellites could do basic coordinated flares/reflections with software updates, but a full “spectacular” show might need hardware tweaks on next-gen birds.

In short, the simplest and most likely version would leverage sunlight reflections with synchronized satellite tilts, turning the night sky into a moving canvas of twinkling points. It’d be epic for a patriotic event, but more “string of controlled flares” than a laser-light extravaganza like a fireworks show. If SpaceX ever does it, expect announcements and tracking apps to tell you exactly when/where to look!


Quite fascinating!
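One figure worth sanity-checking is Grok’s claim that only ~50–100 satellites are visible overhead at once. Here’s a back-of-the-envelope sketch of my own, assuming a single ~550 km shell with satellites spread uniformly over the sphere (which real Starlink orbits only approximate):

```python
# Back-of-the-envelope: what fraction of a ~550 km orbital shell is visible
# above a given elevation angle from one spot on the ground? Assumes the
# constellation is spread uniformly over the sphere (a simplification).

import math

R_EARTH_KM = 6371.0
ALTITUDE_KM = 550.0      # typical Starlink shell altitude
CONSTELLATION = 7000     # rough current count, per the text above

def visible_satellites(min_elevation_deg: float) -> float:
    e = math.radians(min_elevation_deg)
    # Nadir angle at the visibility limit: sin(eta) = (R / (R + h)) * cos(e)
    eta = math.asin(R_EARTH_KM / (R_EARTH_KM + ALTITUDE_KM) * math.cos(e))
    # Earth-central angle from the observer to the edge of visibility
    lam = math.pi / 2 - e - eta
    cap_fraction = (1 - math.cos(lam)) / 2   # spherical-cap share of the shell
    return CONSTELLATION * cap_fraction

for elev in (10, 25, 40):
    print(f"above {elev:2d} deg elevation: ~{visible_satellites(elev):.0f} satellites")
# Prints roughly 118, 38, and 14 -- the same ballpark as Grok's figure.
```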

Categories
AI IBM

From Picnic to Workforce: The New Scaling

In 1977, Charles and Ray Eames released a short film for IBM called Powers of Ten.

The film opens with a couple picnicking on a blanket in Chicago and zooms out: every ten seconds, the field of view increases by a factor of ten.

We move from the intimacy of a lakeside lunch to the edge of the observable universe, then plunge back down through the skin of a hand into the subatomic architecture of a carbon atom.

The subtitle was “A Film Dealing with the Relative Size of Things and the Effect of Adding a Zero.”

It was a meditation on scale, suggesting that as we add zeros to our perspective, the very nature of what we are looking at transforms.

Today, with AI, we are living through a new kind of “Powers of Ten” journey, but the zeros aren’t being added to meters; they are being added to tokens.

I recently read a reflection by Azeem Azhar where he chronicled his shift from using 1,000 AI tokens a day to nearly 100 million. In the Eames’ film, adding a zero moved you from a park bench to a city, then to a continent. In the world of Large Language Models, adding a zero moves the AI from a novelty to a tool, then to a collaborator, and eventually, at the scale of 100 million, to something resembling a “workforce.”

“At 100,000 [tokens], a collaborator. At 1 million, I was building workflows. At 10 million, processes. At nearly 100 million – something closer to a workforce.”

This shift is more than just “more of the same.” It is a phase change.

When the Eames’ camera zoomed out to 10²⁴ meters, the Earth didn’t just look smaller; it disappeared into a texture of galaxies.

When we scale our interaction with intelligence by several orders of magnitude, the “picnic” of human cognitionโ€”the way we think, draft, and createโ€”is no longer the center of the frame.

At the 100-million-token-day scale, we aren’t just “using” AI. We are orchestrating vast, invisible ecosystems of thought. We are seeing companies like Spotify where top developers reportedly haven’t written a line of code in months, instead directing systems that ship features while the humans review the output from their phones.

We have added so many zeros that the “relative size” of human effort has changed.

The chilling yet beautiful thing about Powers of Ten was the realization of our own insignificance in the face of the cosmos, balanced by the intricate complexity found within our own cells.

As we zoom out into the “Token-Verse,” we face a similar existential pivot. If an AI can process a hundred million tokens of “thought” in a day, a volume that would take a human years of full-time reading, what does it mean to be the “author” of our lives?

The answer, I suspect, lies back on the picnic blanket.

The Eameses knew that while the scale of the universe is staggering, the meaning is found in the connection between the two people on the grass.

As we add zeros to our digital capabilities, our value shifts from the production of tokens to the intention behind them.

We are no longer the builders of the cathedral; we are the ones deciding why the cathedral needs to exist at all.

We are moving from the era of the “Worker” to the era of the “Architect” or maybe just the “Witness.”

Categories
Computers FORTH IBM Programming

The Architecture of the Stack

Back in the early 1980s, when I worked for IBM, I was able to acquire my own IBM PC and experience my own form of digital frontierism. Today I really wish I had a logbook at hand with a record of everything I did, as my ability to recall those details has faded with age. A couple of the memories that still remain with me involve two obscure languages: APL and FORTH. And then there was Borland Turbo Pascal.

In those early days of the 1980s, memory wasn’t an infinite field; it was a precious, finite resource. While most of us were content living with the structured guardrails of BASIC, there was a subset of us drawn to the elegant, stripped-back world of FORTH.

Learning FORTH felt less like coding and more like learning a new way to breathe. It was lean. It was efficient. It stripped away the overhead of high-level syntax until it was just you, the dictionary, and the stack. There was an honesty to it: no hidden abstractions, just a direct conversation with the hardware.

Then, of course, there was the hurdle of Reverse Polish Notation (RPN). Grokking the stack meant rewiring your brain. You couldn’t just state an operation; you had to prepare the world for it first. You pushed your data onto the stack, one piece at a time, and only then did you call the action. It was a rhythmic, almost percussive way of thinking: Input, input, act.

“In FORTH, you don’t just write programs; you build a language to solve the problem.”

This “bottom-up” philosophy changed the relationship between the creator and the machine. You weren’t just a user; you were an architect of your own vocabulary. To define a new “word” in FORTH was to permanently expand the capabilities of your environment. It was a recursive journey where every small success became a building block for the next complexity.
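For anyone who never met FORTH, here is a toy sketch, in Python and purely illustrative, of that execution model: an RPN interpreter with a data stack and a dictionary you extend by defining new words, the way : SQUARE DUP * ; does in real FORTH.

```python
# A toy model of FORTH's execution: tokens are read left to right, literals
# are pushed onto the data stack, and known "words" act on the stack.

def make_interpreter():
    stack = []
    words = {
        "+":   lambda: stack.append(stack.pop() + stack.pop()),
        "*":   lambda: stack.append(stack.pop() * stack.pop()),
        "DUP": lambda: stack.append(stack[-1]),   # duplicate top of stack
        ".":   lambda: print(stack.pop()),        # pop and print
    }

    def run(source: str) -> None:
        for token in source.split():
            if token in words:
                words[token]()            # act on whatever is already stacked
            else:
                stack.append(int(token))  # a literal: input, input...

    def define(name: str, body: str) -> None:
        # Defining a word extends the dictionary, as : NAME ... ; does in FORTH.
        words[name] = lambda: run(body)

    return run, define

run, define = make_interpreter()
define("SQUARE", "DUP *")   # analogous to : SQUARE DUP * ;
run("3 SQUARE .")           # pushes 3, squares it, prints 9
```

Every word you define becomes a first-class citizen of the dictionary, which is exactly the “architect of your own vocabulary” feeling described above.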

Looking back, those days with the IBM PC and the stack weren’t just about efficiency. They were about the discipline of clarity. When resources are limited, your thinking must be precise. The difficulty of RPN wasn’t a bug; it was a feature that forced you to understand the flow of data at its most fundamental level.

Categories
AI Programming Prompt Engineering Software Work

The Great Inversion

For twenty years, the “Developer Experience” was a war against distraction. We treated the engineer’s focus like a fragile glass sculpture. The goal was simple: maximize the number of minutes a human spent with their fingers on a keyboard.

But as Michael Bloch (@michaelxbloch) recently pointed out, that playbook is officially obsolete.

Bloch shared a story of a startup that reached a breaking point. With the introduction of Claude Code, their old way of working broke. They realized that when the machine can write code faster than a human can think it, the bottleneck is no longer “typing speed.” The bottleneck is clarity of intent.

They called a war room and emerged with a radical new rule: No coding before 10 AM.

From Peer Programming to Peer Prompting

In the old world, this would be heresy. In the new world, it is the only way to survive. The morning is for what Bloch describes as the “Peer Prompt.” Engineers sit together, not to debug, but to define the objective function.

“Agents, not engineers, now do the work. Engineers make sure the agents can do the work well.” – Michael Bloch

Agent-First Engineering Playbook

What Bloch witnessed is the clearest version of the future of engineering. Here is the core of that “Agent-First” philosophy:

  • Agents Are the Primary User: Every system and naming convention is designed for an AI agent as the primary consumer.
  • Code is Context: We optimize for agent comprehensibility. Code itself is the documentation.
  • Data is the Interface: Clean data artifacts allow agents to compose systems without being told how.
  • Maximize Utilization: The most expensive thing in the system is an agent sitting idle while it waits for a human.

Spec the Outcome, Not the Process

When you shift to an agent-led workflow, you stop writing implementation plans and start writing objective functions.

“Review the output, not the code. Don’t read every line an agent writes. Test code against the objective. If it passes, ship it.” – Michael Bloch
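To make “review the output, not the code” concrete, here is a minimal sketch of what an executable objective might look like. The module and function names are hypothetical stand-ins, not from Bloch’s post:

```python
# A minimal sketch of an "objective function" written as an executable test.
# The module and function names (myapp.dedupe, dedupe_customers) are
# hypothetical stand-ins for whatever the agent was asked to build.

from myapp.dedupe import dedupe_customers  # hypothetical agent-built module

def test_objective_unique_emails_all_customers_kept():
    records = [
        {"id": 1, "email": "a@example.com"},
        {"id": 2, "email": "a@example.com"},  # duplicate the agent must collapse
        {"id": 3, "email": "b@example.com"},
    ]
    result = dedupe_customers(records)
    emails = sorted(r["email"] for r in result)
    # The outcome we care about -- not how the agent chose to achieve it.
    assert emails == ["a@example.com", "b@example.com"]
```

If the agent’s code passes the objective, it ships; nobody reads the diff line by line.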

The Six-Month Horizon

Six months from now, there will be two kinds of engineering teams: ones that rebuilt how they work from first principles, and ones still trying to make agents fit into their old playbook.

If you haven’t had your version of the Michael Bloch “war room” yet, have the meeting. Throw out the playbook. Write the new one.