Categories
AI

The Ghost in the Spreadsheet

There is a specific kind of quiet that descends when a tool finally disappears into the task. We saw it with the cloud—once a radical, debated concept of “someone else’s computer,” now merely the invisible oxygen of the internet. We saw it with Uber, moving from the existential dread of entering a stranger’s car to the thoughtless tap of a screen.

In a recent reflection, Om Malik captures this shift happening again, this time with the loud, often overbearing presence of Artificial Intelligence. For years, we have treated AI like a digital parlor trick or a demanding new guest that requires “prompt engineering” and constant supervision. But as Om notes, the real revolution isn’t found in the chatbots; it’s found in the spreadsheet.

“I wasn’t spending my time crafting elaborate prompts. I was just working. The intelligence was just hovering to help me. Right there, inside the workflow, simply augmenting what I was doing.”

This is the transition from “Frontier AI” to “Embedded Intelligence.” It is the moment technology stops being a destination and starts being a lens. When Om describes using Claude within Excel to model his spending, he isn’t “using AI”—he is just “doing his taxes,” only with a sharper set of eyes.

There is a profound humility in this shift. We are moving away from the “God-in-a-box” phase of AI and into the “Amanuensis” phase. It reminds me of the old craftsmanship of photography, another area Om touches upon. We used to carry a bag full of glass lenses to compensate for the limitations of light and distance. Now, a fixed lens and a bit of intelligent upscaling do the work. The “work” hasn’t changed—the vision of the photographer remains the soul of the image—but the friction has evaporated.

However, as the friction disappears, a new, more haunting question emerges. If the “grunt work” was actually our training ground, what happens when we skip the practice?

“The grunt work was the training. If the grunt work goes away, how do young people learn? They were learning how everything worked… The reliance on automation makes people lose their instincts.”

This is the philosopher’s dilemma in the age of efficiency. When we no longer have to struggle with the cells of a spreadsheet or the blemishes in a darkroom, we save time, but we might lose the “feel” of the fabric. Purpose, after all, is often found in the doing, not just the result.

As AI becomes invisible, we must be careful not to become invisible along with it. The goal of augmented intelligence should not be to replace the human at the center, but to clear the debris so that the human can finally see the horizon. We are entering the era of the “invisible assistant,” and our challenge now is to ensure we still know how to lead.

Categories
AI Business SpaceX

Overcoming Limiting Factors: Orbital Data Centers & The Optimus Era

One of my favorite people to follow on X is @pbeisel (Phil Beisel). He’s quite active, sharing his thoughts on many of the same topics I’m interested in: technology, AI, robotics, computing, etc. Phil has written a series of great articles about Tesla Full Self-Driving, Optimus, and more that are well worth spending time with.

On Saturdays, he gets together on YouTube with Randy Kirk and they talk about what’s interesting from the last week – often that’s got something to do with various aspects of the “Musk-conomy” – the various companies of Elon Musk.

This week’s edition reviews Phil’s distillation of the Cheeky Pint interview with Elon published earlier this week. As usual, Phil’s comments add further insight into the topic.

When I begin viewing a long YouTube video, I also like an accompanying summary that I can follow along with. YouTube now has the ability to generate these summaries, but I’ve got a custom Gem prompt that I prefer to use instead, which tailors the results a bit more to my liking.

Below, for example, is the summary of this weekโ€™s conversation between Phil and Randy that was generated by Gemini Pro 3:

Executive Summary: The Musk “Musconomy” Convergence

The central thesis of the discussion is that Elon Musk is moving toward a total vertical integration of his companies (Tesla, SpaceX, and xAI) to overcome terrestrial “limiting factors” and dominate both the physical and digital manifestation of AI.


1. The “Limiting Factor” Philosophy [11:20]

  • Problem-Solving Framework: Musk focuses personal time and resources strictly on the “limiting factor” of any given project—currently identified as compute power and energy.
  • Vertical Integration: To bypass supply chain bottlenecks (e.g., turbine blades for power plants), Musk is opting to manufacture raw materials and components in-house rather than relying on external catalogs [18:18].

2. Orbital Data Centers: The Space “Escape Hatch” [24:19]

  • Energy Constraints: Terrestrial data centers are hitting a wall due to slow public utilities and permitting [15:26].
  • The Vision: Moving inference-based data centers to orbit using a constellation of satellites connected by optical laser links.
  • Economic Viability: Musk projects economic viability for space-based data centers within 30–36 months, with reusability of the Starship being the primary hurdle [25:03].
  • Strategic Advantage: Unlike Google or Meta, Musk owns the “kilogram-to-space” delivery mechanism, potentially forcing competitors to rent capacity from SpaceX [32:19].

3. Optimus and the “Abundance Engine” [39:00]

  • Physical Dexterity: Musk is prioritizing high-dexterity actuators designed in-house to achieve human-level utility [40:30].
  • Training Scale: Tesla is moving toward training Optimus in “gymnasiums” using 10,000–30,000 bots working 24/7 to develop “composable” skills (basic movements) and “decomposable” skills (complex tasks) [55:13].
  • Impact: Optimus is viewed as a paradigm-shifting product that will redefine global GDP by decoupling labor from human constraints [54:56].

4. xAI: The Digital Control Plane [56:19]

  • The “Brain” Portability: xAI is viewed as the “orchestration AI” for the entire fleet of Musk’s physical assets (Starships, Teslas, and Optimus) [59:01].
  • Unified Interface: The vision includes a seamless “digital personality” or movable brain that follows the user from their phone to their car to their home robot [01:00:15].

Key Projections & Timelines

  • SpaceX IPO: Likely to happen before a Tesla merger to attract cheap capital [03:31]
  • Solar Scaling: Aiming for a 300x increase (100 gigawatts/year) [22:21]
  • Starship: Reusability remains the “unlock” for space-based AI economics [25:51]

Conclusion: The “Musconomy” is transitioning from separate ventures into a singular entity where SpaceX provides infrastructure, Tesla provides the physical bodies, and xAI provides the intelligence.

Categories
Business

The Geometry of Focus: Finding the Limiting Factor

In the modern landscape of high-stakes management, there is a recurring temptation to solve everything at once. We are taught to optimize across the board—to improve efficiency by 2% here, 5% there—until the entire machine hums. But in a recent conversation with John Collison and Dwarkesh Patel, Elon Musk repeatedly returned to a single, almost obsessive mantra: the “limiting factor.”

It is a deceptively simple phrase. It suggests that at any given moment, there is one specific bottleneck that dictates the speed of the entire enterprise. If you aren’t working on that, you aren’t really moving the needle. You are merely polishing stuff.

“I think people are going to have real trouble turning on like the chip output will exceed the ability to turn chips on… the current limiting factor that I see… in the one-year time frame it’s energy power production.”

Musk’s management technique is not about broad oversight; it is about a radical, almost violent prioritization. He looks at the timeline—one year, three years, ten years—and asks: What is the wall we are about to hit? Right now, it might be the availability of GPUs. In twelve months, it might be the physical gigawatts of electricity required to plug them in. In thirty-six months, it might be the thermal constraints of Earth’s atmosphere, necessitating a move to space.

This approach requires a high “pain threshold.” To solve a limiting factor, you often have to lean into acute, short-term struggle to avoid the chronic, slow death of stagnation. John Collison noted this during the interview:

“Most people are willing to endure any amount of chronic pain to avoid acute pain… it feels like a lot of the cases we're talking about are just leaning into the acute pain… to actually solve the bottleneck.”

For many leaders, the “limiting factor” is often something they aren't even looking at because it lies outside their perceived domain. A software CEO might think their limit is talent, when it’s actually the speed of their internal decision-making. A manufacturer might think it’s raw materials, when it’s actually the morale of the factory floor.

To manage by the limiting factor is to admit that 90% of what you could be doing is a distraction. It is a philosophy of subtraction and focus. It demands that we stop asking “What can we improve?” and start asking “What is stopping us from being ten times larger?” Once you identify that wall, you throw every resource you have at it until it crumbles. And then—and this is the part that requires true stamina—you immediately go looking for the next wall.

By focusing on the one thing that matters, we stop being busy and start being effective. We stop managing the status quo and start engineering what may feel like the impossible.

Categories
AI Work

The Rungs We Leave Behind

“Companies, too, must prepare. To thrive they need not only to make the best use of AI, but also to find and nurture the best people to work with it. Some back-office workers will lose their jobs. But others with tacit knowledge of the business may be trained for new roles. The biggest mistake would be to stop hiring young people altogether. That would not only choke off the pipeline for future talent, it would rob businesses of AI natives. Instead, companies should rethink the type of work they offer young people—less grunt labour, more judgment and analysis; speedier rotations across the business so they gain insight that AI cannot have; piloting new roles and trying new approaches.”
โ€” The Economist

There is a specific kind of quiet panic in boardrooms today. It isn't just about the bottom line; it’s about the lineage of knowledge. For decades, the “entry-level” role served a hidden purpose. It wasn't just about getting the spreadsheets done; it was about osmosis. By doing the “grunt labor,” a young professional absorbed the culture, the politics, and the subtle, unwritten rhythms of an industry—what we call “tacit knowledge.”

We often view AI as a replacement for the “boring stuff,” but we forget that the boring stuff was the soil in which expertise grew. If we remove the bottom rungs of the ladder because a machine can climb them faster, how do we expect anyone to reach the top?

The shift from “labor” to “judgment” is a profound psychological leap. We are essentially asking 22-year-olds to skip the apprenticeship of execution and move straight into the apprenticeship of discernment. This requires a radical empathy from leadership. We cannot simply hand a junior employee a powerful AI tool and expect them to know what “good” looks like if they’ve never seen “bad” up close.

The “AI native” brings a fluidity with technology that my generation might never fully replicate, but they lack the scars of experience that inform intuition. To thrive, companies must become teaching hospitals rather than just production factories. We need to create “judgment-rich” roles where young people are encouraged to experiment, to fail safely, and to rotate through the business at a pace that keeps them ahead of the automation curve.

The disruption is here. It is unavoidable. But there is a soulful middle ground: using AI to strip away the drudgery while doubling down on the human mentorship that transforms a “worker” into a “leader.” The goal isn't just to make the best use of AI; it’s to ensure that when the AI provides an answer, there is still a human in the room with the soul and the context to know if that answer is right.

Categories
AI AI: Large Language Models

Kimi K2.5

I tried a few conversations with Kimi K2.5 last night and it is pretty impressive. It remains to be seen how the competition between models – especially open-weight / open-source models – will end up! Rumors are swirling that DeepSeek may release an updated model next month. We’ll see!

I asked Grok for the consensus “buzz” on the release of Kimi K2.5. Here’s what Grok reported this morning:

The consensus buzz around Kimi 2.5 (more precisely Kimi K2.5, from Moonshot AI) is extremely positive, especially in the open-source AI community, on platforms like Reddit (r/LocalLLaMA, r/singularity), X, Hugging Face, and tech news outlets. It was released very recently (around January 27, 2026), so the excitement is fresh and intense.

It’s widely hailed as one of the most powerful open-source models available right nowโ€”a native multimodal (text + vision, including strong video understanding and “visual coding”) Mixture-of-Experts model with ~1 trillion total parameters (32B active), 256K context, native INT4 quantization for efficiency, and standout agentic capabilities.

Key highlights driving the hype:

  • Benchmark dominance — It outperforms or beats models like GPT-5.2, Claude 4.5 Opus/Sonnet, Gemini variants, and other frontier models in several areas, including:
  • Humanity’s Last Exam (HLE): ~50-51.8% with tools (strong agentic reasoning)
  • SWE-Bench Verified (coding): Top scores, often leading in visual-to-code and software engineering tasks
  • Multimodal benchmarks (MMMU Pro, VideoMMMU, OCR-heavy tasks): State-of-the-art or near-SOTA without fine-tuning
  • Math/reasoning (AIME, GPQA-Diamond, etc.): Frequently matches or exceeds closed models
  • Standout features — "Agent Swarm" (beta) lets it orchestrate up to 100 sub-agents in parallel for complex tasks, handling 1,500+ tool calls and reportedly cutting execution time by up to 4.5x on long-horizon workflows. This is seen as a big step forward in agentic AI. Visual coding is a killer demo: upload a video/screen recording of a website/interface, and it clones it (including UX interactions) in one shot—people are calling this transformative for "vibe coding" and rapid prototyping.
  • Open-source advantage — Weights on Hugging Face under a Modified MIT license (permissive for most uses). It's accessible via APIs (OpenAI-compatible) on platforms like Moonshot's site, Together AI, NVIDIA NIM, and others—often at lower cost than Western closed models. Many see this as China continuing to close the gap (or pull ahead in open releases).

Community sentiment:

  • Enthusiastic praise for being “the strongest open-source model” in multimodal/agentic domains, with comments like “huge leap,” “crushing benchmarks,” and “changing the game” for coding, visual analysis, and tool-augmented work.
  • Some skepticism exists (as with any new model): questions about real-world vs. benchmark performance, potential optimization for evals, or occasional long-context quirks in earlier Kimi versions. A few users report mixed results in specific tools (e.g., GitHub Copilot integrations), but overall feedback leans heavily positive, especially for reasoning, creative tasks, and agentic flows.
  • It’s fueling discussions about open-source momentum, with parallels drawn to DeepSeek, Qwen, and othersโ€”positioning Moonshot as a top Chinese AI player.

In short, the vibe is "this is a massive win for open AI" and "watch out closed models"—with developers rushing to test it for coding agents, visual-to-code projects, and multi-agent setups. If you're into frontier open models, it's the hot topic right now.

Categories
AI

The Second Fire: From Finding to Forming

There is a specific kind of vertigo that comes with a paradigm shift. It’s the feeling of standing on the edge of a map that has just been unrolled to reveal twice as much territory as you thought existed. Lately, as I navigate the vast, generative landscape of AI, that old vertigo has returned. It’s a hauntingly familiar resonance, a structural echo of the late nineties and early 2000s when we first encountered the Google search bar.

Back then, the world was a series of closed doors. Information was siloed in physical libraries, expensive encyclopedias, or the unreliable oral histories of our social circles. Then came that clean, white interface with a single blinking cursor. Suddenly, the friction of “not knowing” began to evaporate. We weren't just browsing the web; we were suddenly endowed with a collective memory. It felt like a superpower—the ability to summon any fact from the digital ether in milliseconds.

“Google is not just a search engine; it is a way of life. It is the way we find out who we are, where we are going, and what we are doing.”

Today, the sensation is different in texture but identical in weight. If Google gave us the power to find, AI is giving us the power to form.

The “Aha!” moment of 2026 isn't about locating a PDF or a Wikipedia entry; it’s the realization that the distance between a thought and its realization has shrunk to almost nothing. When I prompt a model to synthesize a complex theory or visualize a dream, I feel that same electric jolt I felt twenty years ago when I realized I’d never have to wonder about a trivia fact ever again.

But there is a philosophical weight to this new “awesome.” With Google, the challenge was discernment—filtering the flood of information to find the truth. With AI, the challenge is intent. When the “how” becomes effortless, the “why” becomes the only thing that matters. We are moving from the era of the Librarian to the era of the Architect.

We are once again holding a new kind of fire. It’s warm, it’s brilliant, and just like the first time we saw that search bar, we know that the world we lived in yesterday is gone, replaced by a version where our reach finally matches our imagination.

Categories
AI AI: Prompting Prompt Engineering

Using AI to Improve Prompts

This morning I came across a prompt that I’ve been finding provides me with some interesting results in understanding the financial results of a public company. I’ve used it several times on various companies and have found new insights in the results that I otherwise might not have realized. Here’s the prompt:

Analyze the last 3 earnings calls for [symbol].

Don't summarize what they said - tell me what they're NOT saying.

Focus on:

1) Questions the CEO dodged or gave vague answers to,
2) Metrics they stopped reporting compared to previous quarters,
3) Language changes - where they went from confident to cautious or vice versa,
4) New talking points that appeared suddenly,
5) Guidance changes and the exact wording they used to frame it. Then connect this to their stock performance in the 2 weeks following each call.

What pattern emerges? Explain clearly.

Is the stock a sell or a buy at its current price? How strong is your conviction?

One of the techniques that advanced AI users employ is to use AI to improve on their prompts. I wanted to see what Gemini 3 Pro could do to help me improve on this particular prompt – so I asked it:

You are an expert at prompt engineering. Here's a prompt that I find very useful but would like help to improve. What would you recommend to make it even better? Prompt: [insert prompt here]

Gemini responded and educated me:

This is a sophisticated request. You are asking for negative space analysis—detecting the absence of signal, which is significantly harder for LLMs than summarizing present text.

The weakness of your current prompt is that it relies on the model’s general “feeling” about what was dodged. To make it “expert-level,” you need to force the model to perform comparative forensics between the Prepared Remarks (scripted/safe) and the Q&A (unscripted/risky), and between historical promises and current reality.

Here is the recommended upgrade to your prompt, followed by an explanation of why these changes work.

And the suggested prompt it provided when I asked it to optimize the prompt for $PYPL:

Role: Act as a Forensic Financial Analyst and Researcher. Task: Analyze the earnings call transcripts for $PYPL (PayPal) from Q1 2025, Q2 2025, and Q3 2025.

Core Directive: Do not summarize the "Bull Case." Your goal is to identify omission, deflection, and narrative drift.

Step 1: The "Dog That Didn't Bark" (Metric Analysis)

Compare the metrics reported in the Q1 2025 deck/opening remarks against Q3 2025.

List any specific metric (e.g., Net New Actives, TPV ex-eBay, OCF growth) that was prominent in earlier calls but vanished or was de-emphasized in the most recent call.

Output: A table: [Metric Name] | [Q1 Mention Count] | [Q3 Mention Count] | [Context of Disappearance].

Step 2: Semantic Divergence (Prepared vs. Q&A)

Analyze the tone difference between the CEO's Prepared Remarks (Scripted) and their Q&A Responses (Unscripted).

Identify specific instances where a confident claim in the script was walked back or qualified with "hedging" words ("essentially," "broadly," "we believe," "monitor") during the Q&A.

Step 3: The "Non-Answer" Detection

Review the Q&A section. Identify questions where the executive:

Pivoted to a different topic (e.g., asked about margins, answered about volume).

Used "word salad" (high jargon, low specific detail).

Claimed it was "too early to tell" for a product launched >6 months ago.

Quote the specific question and the specific non-answer.

Step 4: Stock Correlation

Retrieve the stock price performance for the 10 trading days following each call.

Correlate the severity of the "evasiveness" you found in Step 3 with the subsequent price action.

Final Verdict:

Based only on the negative signals (omissions and deflections), rate the stock as Accumulate, Neutral, or Distribute (Sell).

Provide a "Conviction Score" (1-10) based on the clarity of the executive team's transparency.

Gemini went on to explain the enhancements it had made to the prompt:

Why This is Better

  1. It defines the “Negative Space” (Step 1) LLMs hallucinate less when you ask them to count. By asking for a specific comparison of mention counts of metrics between Q1 and Q3, you force the model to mathematically prove that a topic has been dropped.
  2. It separates Script from Reality (Step 2) Executives are media-trained to sound confident in their opening speech. The “alpha” (the edge) is found when they get flustered in the Q&A. This prompt explicitly instructs the model to treat those two sections differently.
  3. It targets specific evasion tactics (Step 3) Instead of just asking for “dodged questions,” this prompt gives the model a taxonomy of evasion: Pivoting and Word Salad. This helps the model recognize that “We are excited about the long-term trajectory of Venmo” is actually a non-answer to “Why did Venmo revenue drop this quarter?”
  4. It forces “Conviction Scoring” Asking for a simple “Buy/Sell” often yields a hedged, safe answer from AI. Asking for a “Conviction Score” forces the model to weigh the evidence. If it finds many dodged questions, the conviction score for a “Sell” should naturally be higher.
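The mention-counting idea in point 1 can even be sanity-checked deterministically, outside the model. Here is a minimal Python sketch of that comparison; the function names and the toy transcripts are my own illustration, not part of Gemini's output:

```python
import re

def mention_counts(transcript: str, metrics: list[str]) -> dict[str, int]:
    """Count case-insensitive whole-phrase mentions of each metric name."""
    return {
        m: len(re.findall(re.escape(m), transcript, flags=re.IGNORECASE))
        for m in metrics
    }

def vanished_metrics(oldest: str, newest: str, metrics: list[str]) -> list[tuple[str, int, int]]:
    """Return (metric, old_count, new_count) rows for metrics that were
    discussed in the oldest call but are absent from the newest one."""
    old, new = mention_counts(oldest, metrics), mention_counts(newest, metrics)
    return [(m, old[m], new[m]) for m in metrics if old[m] > 0 and new[m] == 0]

# Toy transcript snippets (hypothetical, for illustration only).
q1 = "Net New Actives grew nicely. Net New Actives remain a focus. TPV was strong."
q3 = "TPV was strong this quarter. We feel good about engagement."

print(vanished_metrics(q1, q3, ["Net New Actives", "TPV"]))
# [('Net New Actives', 2, 0)]
```

A raw count like this is crude (it misses synonyms and redefined metrics), but it gives you a hard number to hold the model's "vanishing metric" table against.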

This is just one example of using AI to enhance what’s possible using AI. I found this to be a very interesting and educational exercise and example!

Various AIs naturally respond differently to this prompt – and it’s interesting to compare the results for the same company. Give it a try!

After running the prompt on a couple of companies, I realized that this is a prompt I’d like to generalize and save away as a Gemini Gem so that I can easily run it whenever I want. So I asked Gemini:

I would like to generalize this and turn it into a gem that I can run on any company.

Gemini’s response:

This is a great move. By generalizing this, you are creating a “Forensic Short-Seller” agent that you can drop onto any earnings situation.

Below is the Master Prompt Template. I have swapped out the PayPal-specific metrics for variable placeholders and generalized the logic to work across industries (e.g., it will look for “Same Store Sales” in retail just as aggressively as “ARR” in software).

The “Negative Space” Forensic Prompt
Copy and paste the text below. All you need to do is replace the bracketed [VARIABLES].

Role: Act as a Forensic Financial Analyst and Short-Seller Researcher.

Context: I need you to perform a “Negative Space Analysis” on the last 3 earnings calls for [INSERT TICKER SYMBOL].

The Theory: Corporate executives are incentivized to highlight growth and obscure deterioration. The truth is often found in what they stop saying, or how they change their phrasing, rather than their headline numbers.

Your Goal: Do not summarize the “Bull Case.” Identify omission, deflection, and narrative drift.

Step 1: The “Vanishing Metric” Analysis

Scan the “Key Performance Indicators” (KPIs) reported in the oldest of the three calls.

Compare them to the most recent call.

Identify any specific metric (e.g., Customer Count, Retention Rate, Margins per Unit, backlog) that was prominent in earlier calls but was removed, redefined, or de-emphasized in the most recent call.

Output: A table: | Metric Name | Mention Count (Oldest Call) | Mention Count (Newest Call) | The Executive Excuse (if any given) |

Step 2: Semantic Divergence (Scripted vs. Unscripted)

Analyze the tone difference between the CEO/CFO Prepared Remarks (Scripted) and their Q&A Responses (Unscripted).

Identify instances where a confident claim in the script was walked back in the Q&A using “hedging” language (e.g., “headwinds,” “macro environment,” “transitory,” “conservative view”).

Flag any “Adjective Inflation”โ€”using strong adjectives (“robust,” “tremendous”) to describe mediocre data.

Step 3: The “Non-Answer” Detection

Review the Q&A section specifically. Identify questions where the executive:

Pivoted: Asked about A, answered about B.

Filibustered: Gave a long, jargon-heavy answer without specific numbers.

Deferred: Claimed it is “too early to tell” for a strategic initiative that should have data by now.

Format:

The Question: [Quote the Analyst’s hard question] The Evasion: [Quote the specific part of the answer that dodges] Translation: [Briefly explain what they were hiding]

Step 4: Sentiment vs. Price Action

Look at the stock price performance in the 10 days following each of the 3 calls.

Compare the CEO’s confidence level (High/Medium/Low) with the market’s reaction.

Pattern Recognition: Does the market consistently sell off despite “good news” (indicating management has lost credibility)?

Final Verdict:

Rating: Accumulate / Neutral / Distribute (Sell)

Conviction Score (1-10): (10 = The company is totally opaque/hiding bad news; 1 = The company is transparent).

The “Smoking Gun”: One sentence summarizing the single most concerning omission or change in narrative.

How to use this “Gem” effectively
To get the most out of this generalized prompt, keep these three tweaks in mind:

Define the “Golden KPI”: If you know the industry, add a hint in Step 1.

For SaaS: “Focus on Net Dollar Retention and ARR.”

For Retail: “Focus on Same Store Sales and Inventory levels.”

For Manufacturing: “Focus on Order Backlog and Book-to-Bill ratio.”

Provide the Data: Most LLMs cannot browse live transcripts behind paywalls perfectly. For best results, paste the text of the transcripts (or the Q&A sections) directly into the chat before running the prompt, or ensure you are using a model with active web browsing that can access financial news sites.

The “Competitor Check”: If you want to make it even sharper, add this line to Step 2: “Contrast their tone with [Competitor Ticker], who reported recently.”

Categories
AI Robotics

Breaking the Glass: When Intelligence Enters the Physical World

For the last forty years, our relationship with digital intelligence has been trapped behind glass. From the beige box of the personal computer to the sleek slab of the iPhone, we have accessed information through a window. We stare at intelligence; it stares back, passive and disembodied. We ask it questions, and it flashes text on a screen. But it has no hands. It has no agency. It cannot pour a glass of water or comfort a child.

As Phil Beisel astutely notes, we are standing on the precipice of a profound phase shift:

“Optimus marks the moment intelligence leaves the screen and enters the physical world at scale.”

This isn’t just about a “better robot.” It is the convergence of three exponential curves crashing into one another: AI software capability, custom silicon efficiency, and electromechanical dexterity. When you multiply these factors, you don’t just get a machine; you get a new category of being. We are moving from “compressed book learning”โ€”the LLMs that can write poetry but can’t lift a pencilโ€”to embodied intelligence that understands physics, gravity, and fragility.

The Pluribus Moment

The philosophical implication of this transition is staggering. We are building a “Pluribus” entity—a hive mind where individual learning becomes collective capability instantly.

In the human world, if I learn to play the violin, you do not. I must teach you, and you must struggle for years to master it. In the world of Optimus, if one unit learns to solder a circuit or perform a specific surgery, the entire fleet learns it overnight. The friction of skill transfer drops to zero.

The End of Scarcity

Elon Musk calls this the “infinite money glitch,” a sterile economic term for what is actually a humanitarian revolution: the decoupling of labor from human time. If the machine can replicate human movement and action 24/7, the cost of labor effectively trends toward zero. We often fear this as “replacement,” but looked at through a lens of abundance, it is the collapse of scarcity.

We are watching the birth of a world where the physical limitations that have defined the human condition—exhaustion, injury, the slow grind of mastering a craft—are solved by a proxy that we built. Intelligence is no longer a ghost in the machine; it is the machine itself, walking among us, ready to work.

Categories
AI History Living

The Echo of the Roar

It is a strange sensation to look back exactly one century and see our own reflection staring back at us, sepia-toned but unmistakably familiar. We often think of the “Roaring Twenties” as a stylistic era—flapper dresses, Art Deco skyscrapers, and jazz. But beneath the aesthetic was a seismic technological shift that mirrors our current moment with an almost eerie precision.

In the 1920s, the world was shrinking. The radio was the “Great Disrupter” of the day. For the first time in human history, a voice could travel instantly from a studio in Pittsburgh to a farm in Nebraska. It was the democratization of information, a sudden collapse of distance that left society both thrilled and anxious.

“The radio brought the world into the living room; the algorithm brings the universe into our pockets.”

Today, we stand in the wash of a similar wave. If the radio brought the world into the living room, the internet—and specifically the generative AI of this decade—has brought the collective consciousness of humanity into our pockets.

The parallels in infrastructure are just as striking. One hundred years ago, the internal combustion engine was reshaping the physical landscape. The horse was yielding to the Model T; mud paths were being paved into highways. The very geography of how we lived was being rewritten by the automobile. In the 2020s, the “highway” is digital, built on cloud infrastructure and fiber optics, and the vehicle isn’t a Ford, but an algorithm. We are transitioning from physical labor to cognitive automation just as they transitioned from animal labor to mechanical muscle.

The Texture of Time

There is a specific texture to this kind of time. It is a mix of vertigo and acceleration. In 1925, the cultural critic might have worried that the “machine age” was stripping away our humanity, turning men into cogs on an assembly line. In 2025, we worry that the “algorithmic age” is stripping away our agency, turning creativity into a prompt.

But here is the insight that offers me comfort: The 1920s were chaotic, yes, but they were also a crucible of immense creativity. The pressure of that technological change forged modernism in literature, new forms of architecture, and entirely new ways of understanding the universe (quantum mechanics began finding its footing then).

We are not just passive observers of a repeating cycle. We are the navigators of the rhyme. The technology changes—from vacuum tubes to neural networks—but the human task remains the same: to find the signal in the static. To ensure that as the machines get faster, our souls do not merely get cheaper. We must decide, just as they had to a century ago, whether we will be consumed by the roar, or if we will learn to conduct the music.

Categories
AI AI: Large Language Models Investing

From Ink to Insight

There is a distinct friction that exists between the analog world and the digital one. For years, analog notebooks have been the graveyard of good intentions—lists of books to read, article ideas to write, and companies to investigate, all trapped in the amber of my barely legible handwriting.

I recently found myself looking at one of these lists: a scrawl of company names I had jotted down while reading an article discussing possible companies for investment in 2026. Usually, this is where the work begins—taking my handwritten notes, typing them out one by one, searching for tickers, opening tabs, etc. It is low-value administrative work that often kills any spark of curiosity before it can turn into useful analysis.

“The barrier to entry for deep research drops to the time it takes to snap a photo.”

On a whim, I snapped a photo and uploaded it to Gemini 3 Pro. “Transcribe this,” I asked. “Give me the tickers.”

I expected errors. My handwriting is, to put it mildly, not easy to read (even for me!).

Instead, the AI didn’t just perform Optical Character Recognition (OCR); it performed contextual recognition. It understood that the scribble resembling “Apl” in a list of businesses was likely Apple, and returned $AAPL. It deciphered the intent behind the ink.

But the real shift happened when I asked Gemini to pivot immediately into research. Within seconds, I went from a static piece of paper to a dynamic analysis of P/E ratios, recent news, and market sentiment. The friction was gone.

This experience wasn’t just about productivity; it was about the fluidity of thought. We are moving toward a reality where the interface between the physical world and digital intelligence is becoming permeable. When the barrier to entry for deep research drops to the time it takes to snap a photo, our curiosity is no longer limited by our patience for data entry. We are free to simply think.