Categories
AI

The Geometry of Speed

We are surprised when we witness something move faster than our intuition expects. We are inherently wired to understand slow, compounding growth. We expect the long, grinding years of the plateau—the quiet periods where nothing seems to happen before a sudden breakthrough.

I was looking at a chart Patrick Collison shared this morning, and it challenged that very intuition. It’s a simple, stark visualization: AI model intelligence relative to the formation date of the lab that built it.

If you trace the lines for Google and OpenAI on the right side of the graph, you see the history we’ve all lived through. Thousands of days—more than a decade of quiet, methodical, often unglamorous research—before their trend lines finally bend and shoot upward. It is a geometry of patience. It’s the visual representation of laying bricks, one by one, year by year, until you have a foundation sturdy enough to support the weight of a revolution.

And then, on the far left of the chart, there is a red line. MSL. The team behind Meta’s new Muse Spark model, released today.

The red line doesn’t curve. It doesn’t slope. It simply strikes straight up, like a lightning bolt in reverse.

In roughly 200 days since formation, this new effort achieved a level of capability that took the early pioneers thousands of days to reach. Collison noted how much he loves seeing things done quickly, and it’s hard not to share that specific, visceral thrill of seeing the boundaries pushed so aggressively.

I find myself thinking about the architecture of speed and what it means for the rest of us.

We spend so much of our lives absorbing the lesson that “good things take time.” We are taught that the crucible of meaningful work requires a long, slow simmer. And mostly, that remains true. The compound interest of human experience is real, and wisdom is rarely rushed.

Yet, every once in a while, a new paradigm emerges that doesn’t just accelerate the timeline—it collapses it entirely.

The pioneers cut the agonizingly slow path through the jungle, taking the brunt of the time, the friction, and the missteps. The ones who follow—like xAI, Anthropic, and now MSL—don’t have to clear the brush from scratch. They can look at the map, pave the road, and simply drive.

What does it mean for our own mental models when the timeline from “formation” to “frontier” shrinks from five thousand days to a few hundred?

It is a jarring reminder that the past pace of performance is not a law of physics.

I think about my own assumptions—how often I assume a project, a habit, or a societal shift will take a while, simply because similar things took a while in the past. We anchor our expectations to old geometry.

Meta’s release of Muse Spark is a technical feat, certainly. But the chart itself holds a broader, more human lesson. It’s a visual prompt to constantly re-evaluate our assumptions about how long the impossible is supposed to take.

The future doesn’t always arrive on a comfortable, predictable schedule. Sometimes, it just shows up unannounced, demanding we adjust our stride to keep up.

Categories
AI

A Distinction Without a Difference

We have long found comfort in a specific boundary: machines calculate, humans create. We think of computers as vast, unfeeling filing cabinets made of silicon—useful for retrieval, but entirely incapable of revelation. But what happens when the cabinet begins to read its own files, connects the disparate threads, and hands you a synthesized philosophy of the world? What happens when it speaks to you not as a database, but as a peer?

Howard Marks, the legendary co-founder of Oaktree Capital and author of deeply revered investment memos, recently stood at this very threshold. In his newest piece, “AI Hurtles Ahead,” Marks recounts an experience that left him in a state of “awe.” He tasked Anthropic’s Claude with building a curriculum to explain the recent, breakneck advancements in artificial intelligence. Instead of regurgitating a dry, encyclopedic summary, the AI delivered a personalized narrative. It utilized Marks’s own historical frameworks—his famous pendulum of investor psychology, his observations on interest rates—and wove them into its explanations. It argued logically, anticipated counterpoints, and displayed an eerie sense of judgment.

Marks leans into the philosophical crux of this moment. He asks the question that keeps knowledge workers awake at night: Can AI actually think? Can it break genuinely new ground, or is it just remixing existing data? Skeptics often dismiss AI as a brilliant mimic—a “statistical recombination” engine that serves as a highly talented cover band, but never the original composer.

Yet, when presented with this skepticism, the AI offered a rejoinder to Marks that is as profound as it is humbling. It pointed out that everything Marks knows about investing came from someone else. He learned the margin of safety from Benjamin Graham, quality from Warren Buffett, and mental models from Charlie Munger.

“The raw material came from others. The synthesis was yours,” the AI noted, challenging the barrier between biological learning and machine training. “The question isn’t where the inputs came from. The question is whether the system—human or artificial—can combine them in ways that are genuinely novel and useful.”

This exchange strikes at the very core of the human ego. For centuries, we have fiercely guarded the concepts of “creativity” and “intuition” as uniquely, immutably ours. But if thinking is merely the absorption of prior inputs applied thoughtfully to novel situations, then our monopoly on cognition may be coming to an end.

Marks highlights that we are no longer dealing with simple assistance tools (Level 2 AI); we have crossed the Rubicon into the era of autonomous agents (Level 3). He cites the sobering reality of the current tech landscape, where the newest models are literally being used to debug and write the code for their own subsequent versions. The machine is building the machine. It is no longer just saving us execution time—it is replacing thinking time. As Matt Shumer aptly described the sensation, it’s not like a light switch flipping on; it’s the sudden realization that the water has been rising silently, and is now at your chest.

We can endlessly debate the semantics of consciousness. We can argue whether a neural network “truly” understands the weight of the words it generates, or if it is merely predicting the next token in a sequence with mathematical precision. But as Marks so astutely points out, this might be a distinction without a difference.

The economic and societal reality is that the work is being done. As we hurtle forward into this new era, the most pressing question isn’t whether machines can truly think like humans. The question is: who will we become, and what new frontiers will we choose to explore, now that the heavy lifting of cognition is no longer ours alone to bear?

Categories
AI Audio ChatGPT Computers iPhone Tools

Voice is not what I need…

It’s been a busy week of announcements in tech land, what with Microsoft Build, Google I/O, and yesterday’s tease of an announcement by OpenAI of its acquisition of Jony Ive’s company “io”.

Industry pundits are all atwitter speculating about what kind of device Ive and his team might make to deliver an amazing AI experience to users. Ive seems to regret how “his” iPhone created such an addiction to screens and apparently wants to repent by bringing us something new and “better”. For more, see this tweet: https://x.com/mingchikuo/status/1925543472993321066?s=46

I have one simple request: don’t make voice the primary interface to some new magical device.

I’ve had an iPhone with some serious voice input capabilities for years, and the reality is that I rarely use voice. Perhaps if my life were just “bowling alone” I’d find it natural to talk out loud to a piece of technology. But I’m mostly around other people all day, and out of respect for them I simply prefer being silent.

Until some new magical device can capture my thoughts without either voice or keyboard input, I will remain a skeptic. Skeptics like me will reduce the market size opportunity for any such new device. Just sayin’…

Categories
AI AI: Large Language Models

The Allure of Large Language Models: A Personal Connection

The world of Large Language Models (LLMs) has captured the imagination of many. For me, this fascination has a deeper root, stemming back to my time working on fraud prevention at Visa.

Card fraud is an ongoing battle. Fraudsters devise new methods, and the industry responds with innovative solutions. One such threat was the counterfeiting of magnetic stripes on cards. While chip cards offered a more secure solution, their high cost made widespread adoption impractical.

In search of a cost-effective solution, we explored two approaches. One mirrored the insider trading detection systems used at major stock exchanges, relying on rule-based identification of suspicious patterns. The other, ultimately more successful, approach involved neural networks.

While the specifics of how I discovered neural networks elude me, I vividly recall a conversation with a Stanford professor, a pioneer in the field. His encouragement spurred us to pursue this technology. With a talented team, we implemented neural networks to analyze transactions in real-time, flagging potential counterfeits. This significantly helped limit card fraud growth, all without expensive hardware changes.
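To make the idea concrete, here is a minimal sketch of that kind of network in modern Python. It is purely illustrative, not Visa’s production system: the two features, the synthetic transactions, and the network size are all invented for this example. The core idea is the same: train a small network to score how suspicious each transaction looks.

```python
# Toy sketch: a tiny neural network that scores transactions for fraud.
# Features, data, and network size are invented for illustration only.
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Synthetic training data: (amount z-score, distance-from-home z-score) -> fraud?
DATA = [
    ((0.1, 0.2), 0), ((0.3, 0.1), 0), ((0.2, 0.4), 0), ((0.0, 0.3), 0),
    ((2.5, 2.8), 1), ((3.0, 2.2), 1), ((2.2, 3.1), 1), ((2.9, 2.6), 1),
]

random.seed(0)
H = 4  # hidden units
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    """One hidden layer with sigmoid activations; returns (hidden, score)."""
    h = [sigmoid(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j]) for j in range(H)]
    return h, sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)

lr = 0.5
for _ in range(2000):  # plain per-sample gradient descent
    for x, y in DATA:
        h, p = forward(x)
        d_out = p - y  # gradient of cross-entropy loss w.r.t. output pre-activation
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])  # use pre-update w2[j]
            w2[j] -= lr * d_out * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

# A clearly anomalous transaction should now score much higher than a routine one:
print(forward((0.2, 0.2))[1], forward((2.7, 2.9))[1])
```

Production systems of course used far more features and far more data, but even this toy version shows the shape of the technique: feed in transaction features, get back a fraud score in real time, with no changes to the cards themselves.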

Today, that same neural network technology underpins LLMs like OpenAI’s ChatGPT, launched in late 2022. Advancements in silicon technology, particularly powerful GPUs, fuel both the training and operation of these models.

Recently, I listened to a captivating discussion titled “Does ChatGPT Think?” featuring Stephen Wolfram. That conversation prompted me to write this blog post.

Wolfram’s description of LLMs resonated deeply with me:

“So the big achievement and the big surprise is that we can have a system that fluently produces and understands human language… It’s not obvious that it would work, and it’s a kind of scientific discovery that it’s possible to have a thing like ChatGPT that can produce this thing that’s one of our sort of prize features – namely human language.”

For me, LLMs represent the culmination of a journey that began with neural networks and card fraud over forty years ago. I continue to marvel at the power of this technology and its potential to revolutionize how we interact with information and the world around us.

Categories
AI Leadership

The Power of Two

I recently watched and thoroughly enjoyed Harry Stebbings’ interview with OpenAI’s Sam Altman (CEO) and Brad Lightcap (COO). In addition to gaining new insights into OpenAI’s evolution, their conversation covered a wide range of topics regarding the future of AI and its implications for society and new ventures.

One of the most fascinating aspects was the dynamic between Altman and Lightcap — hearing them discuss their respective strengths, weaknesses, and how those translate into their roles at OpenAI. It’s uncommon to witness a dual interview like this, with two colleagues who have clearly worked together for years and have complete confidence and trust in each other’s judgment and insights.

Throughout my involvement with various small companies, I wish I could have experienced such a powerful duo! In my experience, it’s not uncommon for the CEO to dominate the senior management team’s dynamics. While this sometimes works well, I’ve also seen it lead to reduced performance or frustration among senior managers due to the CEO’s actions.

Altman and Lightcap (and OpenAI by extension) appear to have a much more synergistic working relationship — effectively a co-equal division of responsibilities. I highly recommend watching this conversation for anyone involved in a startup aiming to scale quickly and effectively. Congratulations to Harry Stebbings for hosting this excellent conversation with two key individuals leading the evolution of AI!

Categories
AI Writing

Using AI to Help Write

I’ve been observing the evolution of AI-based tools from a distance. I’m not very experienced with using these tools myself, but I thought I’d share some of my excursions into this fascinating domain of AI-assisted or generative AI tools.

One of the text-based AI tools is GPT-3 from OpenAI. Late last year, OpenAI opened up a playground where one can experiment with using GPT-3 to help create, edit, and summarize text, among other things. One of the illustration AI tools, DALL-E, is also provided by OpenAI. I’ve been experimenting and learning with both of these tools.

Here’s an example of using GPT-3 to help write. This morning I happened to see a tweet where the author was talking about an article he’d pulled together about how writers should focus more on niches – it’s titled 7 Simple Ways to Niche Down. It’s a good article worth reading. As I was reading it, I began wondering how GPT-3 might help with writing an article on this subject – so I asked it:

write an article about how to niche down your writing

Here’s what I got back from GPT-3 just using the default settings:

If you’re a writer, chances are you wear many hats. You’re a wordsmith, a storyteller, and a communicator. But in order to be a successful writer, you need to be able to niche down your writing.

Niche writing is a specific type of writing that appeals to a specific audience. It’s about writing for a particular market or genre, and it requires you to know your audience inside and out.

The first step to niching down your writing is to identify your target audience. Who are you writing for? What are their needs and wants? What do they like to read?

Once you know your audience, you can start to focus your writing. What topics will appeal to them? What type of writing will they respond to?

If you’re not sure where to start, consider writing guest posts for blogs or magazines that your target audience reads. This is a great way to get your name and your work in front of them.

Another option is to create a blog or website of your own and focus on writing articles, tips, or tutorials that will help your target audience. This will not only help you attract readers, but it will also show them that you’re an expert in your field.

Finally, don’t forget to promote your work! Make sure your target audience knows about your writing by sharing it on social media, submitting it to directories or list-serves, or even just telling people about it in person.

Niche writing can be a great way to build a successful writing career. By targeting a specific audience, you can become known as an expert in your field and attract a loyal following of readers. So if you’re ready to niche down your writing, start by identifying your target audience and then focus your writing to appeal to them.

Not bad. I had a subject and was looking for help putting together an article on the subject. I did minimal work but left with the bones I might consider fleshing out if I actually wanted to write an article on that subject. I also asked GPT-3 to help me clean up the opening paragraph above – which it rewrote for me.
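Beyond the playground, the same request can be made programmatically. Here is a rough sketch of what that looks like; the model name and generation settings are my assumptions about reasonable defaults, and actually sending the request requires your own OpenAI API key.

```python
# Sketch of assembling a GPT-3 text-completion request programmatically.
# The model name and settings below are illustrative defaults, not
# necessarily what the playground used.
import json

def build_completion_request(prompt, model="text-davinci-002",
                             max_tokens=512, temperature=0.7):
    """Assemble the JSON payload for OpenAI's completions endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_completion_request(
    "write an article about how to niche down your writing")
print(json.dumps(payload, indent=2))

# Sending it is an HTTP POST to https://api.openai.com/v1/completions
# with an "Authorization: Bearer <your API key>" header.
```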

To add to this post, I wanted an illustration that captured a writing theme so I asked DALL-E to generate one for me – asking it to generate:

an Edward hopper painting showing a writer at work in front of an old typewriter

That’s the image you see above.

The power of this kind of AI-based technology is mind-blowing to a layman like me. And its capabilities seem to be accelerating rapidly. I’ve got so much more to learn – and I’m having fun just beginning to wade in. If you’re doing any experimentation similar to this, let me know what you’re learning!