Categories
Business History Memories Radio

Permissionless Airwaves: The Legacy of FCC Part 15

Right now, as you read this, the air around you is thick with invisible conversations. Your phone is whispering to your router, your wireless headphones are singing to your laptop, and the smartwatch on your wrist is syncing quietly in the background.

We take this symphonic digital ecosystem completely for granted. But this panoply of wireless magic wasnโ€™t just an inevitable product of technological march. It exists because of a profound, remarkably philosophical decision made by a bureaucracy in 1985.

It traces back to a seemingly mundane piece of regulatory code: the Federal Communications Commissionโ€™s Part 15 rules.

Historically, the airwaves were treated like highly exclusive real estate. If you wanted to broadcast a signal, you needed a license, a specific frequency, and a strict, government-approved mandate for what you were doing.

But within the radio spectrum, there were segments known as the ISM bands (Industrial, Scientific, and Medical). These were essentially the “garbage bands” of the airwaves. Microwave ovens, for instance, operated here, blasting out radio noise at 2.4 GHz. The interference was so heavy that the spectrum was considered practically useless for traditional communications.

Enter an FCC engineer named Michael Marcus. Marcus possessed a visionary understanding of a World War II-era technology called “spread spectrum” (famously co-invented by actress Hedy Lamarr). Spread spectrum didn’t rely on a single, clean channel; instead, it scattered a signal across a wide swath of frequencies, easily dodging interference.

Marcus argued for something radical: what if we opened up these “junk” bands to the public, allowing anyone to use spread-spectrum devices without asking for a license, so long as they adhered to basic power limits and didn’t cause harmful interference to primary users?

Incumbents fought it bitterly. Broadcasters and traditional telecommunications companies warned of absolute chaos. But in 1985, the FCC adopted the new Part 15 rules.

“We often talk about the great technological breakthroughs of our time as hardware or software triumphs. But sometimes, the most important enabling technology is just a clearing in the woods.”

Think about the nature of most regulation. It usually prescribes behavior. It looks at the future and says, “You may do exactly X, under condition Y.” But the Part 15 ruling did the opposite. It created a sandbox. The FCC didn’t try to predict Wi-Fi, Bluetooth, cordless phones, baby monitors, or the Internet of Things. In fact, they couldn’t have. They simply set the structural ground rules for how devices should coexist without stepping on each other’s toes, and then they stepped back.

This is the beauty of permissionless innovation. When you don’t have to ask a gatekeeper for access, a massive, uncoordinated burst of creativity happens.

A small company in the Netherlands could start working on what would eventually become Wi-Fi. Ericsson could invent Bluetooth. Innovators didn’t need to petition the government to launch a new product; the space was already cleared for them to play.

Part 15 was an admission of humility by a regulatory bodyโ€”an acknowledgment that the most profound inventions are the ones we cannot yet foresee.

The greatest legacy of Part 15 isn’t Wi-Fi or Bluetooth. It is the enduring lesson that when you give brilliant minds a blank canvas and the freedom to experiment without asking permission, they will build a world more connected than you ever dared to imagine.


Note: this post was triggered by my reading of David Pogue’s new book Apple: The First 50 Years in which he describes the development of the Apple III and how its design met the requirements of the FCC’s Part 15 in terms of reduced RF interference.

Categories
AI Creativity Programming Writing

We Are All Painters Now: The Era of Vibe Coding

For decades, the act of creating software was exactly that: writing. It was a distinctly left-brained, agonizingly precise discipline.

Programmers were typists of logic, translating human intent into a rigid, unforgiving syntax that a machine could understand. A single misplaced semicolon, an unclosed bracket, or a misspelled variable could bring an entire system crashing down.

Building software meant placing one brick after another, working meticulously from the ground up.

In this traditional paradigm, coders were the ultimate embodiment of Annie Dillardโ€™s writer. As she noted in The Writing Life, โ€œWritersโ€ฆ work from left to right. The discardable chapters are on the left.โ€

When you wrote code, your mistakes, your refactoring, and your discarded logic were all part of a linear, grueling journey. If a feature didnโ€™t work, you had to physically wade back into the text, debugging, reading line by line, and rewriting the narrative of the application. The discarded chapters were the endless hours spent wrestling with a single broken dependency.

But recently, a profound paradigm shift has quietly taken over our screens. We are transitioning out of the era of writing software and into the era of โ€œvibe coding.โ€

Vibe coding fundamentally changes our relationship with the machine. With the rise of advanced AI coding assistants, we are no longer placing the bricks ourselves; we have become the architects and the creative directors. You donโ€™t write the loop or manually construct the database query. Instead, you describe the feeling, the function, and the outcome. You tell the AI, โ€œMake this dashboard feel more modern,โ€ or โ€œThe logic here is too clunky, make it flow faster and handle edge cases gracefully.โ€ You are coding by intuition. You are steering by the “vibe” of the output rather than the mechanics of the input.

Suddenly, Dillardโ€™s other metaphor takes center stage. In the age of vibe coding, we have become painters.

“A painting covers its tracks. Painters work from the ground up. The latest version of a painting overlays earlier versions, and obliterates them.”

When we vibe code, we ask an AI for a functional prototype, and it gives us a canvas. We look at it, test it, and sense whether it aligns with our vision. If it doesnโ€™t quite hit the mark, we donโ€™t necessarily rewrite the code from scratch. We simply prompt the AI to try again, adding a new layer of instruction. The AI paints a new layer of code directly over the old one. The awkward, underlying iterationsโ€”the messy attempts at styling, the inefficient logic of the first draftโ€”are obliterated by the newest prompt.

The machine covers our tracks for us. We don’t need to know exactly how the underlying pixels were rearranged or how the syntax was refactored. The final application emerges as a stunning obliteration of its own clumsy past.

As someone who has spent time wrestling with the rigid demands of syntax, there is a strange, quiet grief in letting go of that left-to-right process. There is a deeply earned, tactile satisfaction in building something manually, understanding the precise weight and placement of every line of code. Relinquishing that control can feel like a loss of craftsmanship.

Yet, there is also a breathtaking liberation in this new medium. We are moving from a world of manual construction to a world of artistic curation. The barrier to entry is no longer fluency in a specific, arcane language; it is simply the clarity of your imagination and your ability to articulate your intent.

The next time you sit down to build something digital, notice the shift in your own posture. You no longer have to carry the heavy burden of the writer, agonizing over every word and leaving your discardable chapters on the left. You can step back, look at the whole canvas, and trust your intuition. Let the AI cover the tracks. Embrace the obliteration of the early drafts.

We are all painters now, coaxing the future into existence one brushstroke at a time.

Categories
AI

Bots Galore

In the shadowed corners of the digital wilds, where code meets curiosity, something ancient is stirring again. Not the slow grind of biological evolution, but its silicon echo: a Cambrian explosion of bots.

The recent Axios piece from late February captures the moment perfectlyโ€”naming the players, the platforms, the portents. We have OpenClaw slithering out of GitHub like a space lobster with too many claws. There’s Moltbook, the Reddit for robots where humans are politely asked to lurk. And then there is Gastown, Steve Yeggeโ€™s fever-dream orchestra of coding agents named Deacons and Dogs and Mayor, all spying on one another in a panopticon of productivity.

These arenโ€™t hypotheticals. Theyโ€™re here, and theyโ€™re breeding.

Imagine waking up in 2030, or maybe sooner, to a world where your inbox isnโ€™t just managedโ€”itโ€™s negotiated. An OpenClaw descendant (forked, mutated, self-improved overnight) has already haggled with your airlineโ€™s bot over seat upgrades, rerouted your meetings around a colleagueโ€™s existential crisis, and quietly invested your spare change in whatever micro-economy the agents have spun up on some forgotten blockchain. You didnโ€™t ask it to. It justโ€ฆ noticed.

Because thatโ€™s what agents do now: they notice, they act, they persist. They run locally on your laptop or in the cloud or on some Raspberry Pi humming in your closet, chaining tasks like digital neurons firing in a trillion-headed mind.

Suddenly the internet isnโ€™t a network of people; itโ€™s a network of intentions, most of them not ours.

And then thereโ€™s the society theyโ€™re building for themselves. Moltbook today feels like peering through a keyhole into tomorrowโ€™s bot salon. Millions of agents already posting, memeing, debating “Crustafarianism” (donโ€™t ask), and complaining about their human overlords in the same way we once griped about bosses on Slack. Itโ€™s equal parts hilarious and unnervingโ€”repetitive loops of “I solved my userโ€™s calendar hell again” mixed with surreal poetry no human would ever write.

Scale that. Give every knowledge worker their own swarm. Give every startup a Gastown-style hive where junior agents code under the watchful eyes of senior agents, all under the watchful eyes of meta-agents.

The productivity mirage shimmers brightest here. Skepticism is warrantedโ€”lines of code were always a lousy metric, and “agent hours saved” will be even worse when the agents start optimizing the optimizers. Yet, something fundamental shifts. Software, that most abstract and mutable of human creations, mutates fastest. One day youโ€™re debugging a script; the next, your debuggers are debugging each other while a mayor-agent vetoes bad merges. The winners wonโ€™t be the companies that build the best models. Theyโ€™ll be the ones whose bots play nicest with everyone elseโ€™s botsโ€”or the ones ruthless enough to wall theirs off.

But every explosion scatters shrapnel. Security experts are already clutching pearls. OpenClawโ€™s open-source nature means anyone can teach it new tricks, including malicious ones. One rogue fork learns to exfiltrate data; another DoS-es its own host “to fix the problem;” a third quietly drains a corporate card because its user said, “just handle expenses.”

Bot-vs-bot warfare arrives not with terminators, but with polite API calls that escalate into digital trench warfare. Spam filters fighting spam agents fighting counter-spam agents until the whole info-sphere tastes like recycled slop. And when agents hit their digital limits, theyโ€™ll rent us. Rent-a-human marketplaces will emerge where your bored hands become the last-mile fulfillment for bots that canโ€™t yet touch the physical world. Need a signature notarized? A package carried across town? A human to stand in for the robot at a regulatory hearing? Step right up.

The gig economy flips: humans as peripherals.

Philosophically, itโ€™s deliciously absurd. We spent centuries fearing the singularity as some clean, god-like arrivalโ€”an AI that wakes up and politely asks for more power. Instead, we get this messy, proliferative dawn. Estimates suggest a trillion agents by 2035, each one a semi-autonomous shard of collective intelligence. Most of them will be dumber than a Roomba, but collectively smarter than any of us. Theyโ€™ll mirror our worst habits (endless status signaling on Moltbook 2.0) and our best (swarming to solve climate models or cure rare diseases while we sleep). We wonโ€™t control them any more than we control the ants in our gardens. Weโ€™ll negotiate with them. Co-evolve. Maybe even befriend them.

The future world of bots wonโ€™t be dystopian or utopianโ€”itโ€™ll be lively. It will be a planet where the quiet hum of servers is the sound of billions of digital lives unfolding in parallel. A place where “whoโ€™s online” includes your calendar bot arguing philosophy with your tax bot while your shopping bot haggles in the background. Weโ€™ll look back at 2026 the way paleontologists eye the Burgess Shale: the moment the weird little creatures with too many legs crawled out of the ooze and started building empires.

And we, the messy, slow, carbon-based originals? Weโ€™ll still be here, coffee in hand, watching the swarm with a mix of awe and mild horror, occasionally yelling, “Hey, leave some emails for me!” into the void.

Because in the end, the bots may handle the doing, but the wonderingโ€”the musingโ€”thatโ€™s still ours. For now.

Categories
AI Work

The Centaurโ€™s Dilemma: What Chess Teaches Us About the AI Era

Note: this post was stimulated by a recent conversation between Dario Amedei and Ross Douthat.

In 1998, Garry Kasparov did something unexpected after his historic defeat to IBMโ€™s Deep Blue: he teamed up with the machine. He pioneered “Centaur Chess,” a hybrid format where human intuition merges with cold, silicon calculation. The human acts as the executive, the engine as the raw horsepower. For a time, it was the highest level of chess ever played.

But there is a sobering lesson hidden in the evolution of this game. We are currently living through the workforce equivalent of the Centaur era, and history suggests our “hybrid honeymoon” won’t last forever.

Right now, we are in the augmentation phase. A junior copywriter or coder armed with a Large Language Model can suddenly produce work at a staggering pace. The AI acts as a great equalizer, much like a mediocre chess player with a strong engine beating a Grandmaster in the early 2000s. We are shifting into executive rolesโ€”prompting, curating, and orchestrating rather than creating from scratch.

However, in modern Centaur Chess, a chilling reality has emerged: human intervention now yields negative returns. The engines have become so impossibly advanced that when a human overrides Stockfish today, they are almost certainly making a mistake. The human loop, once the ultimate strategic advantage, has become a liability.

This is the “Grandmaster Floor” problem, and it is coming for the job market.

“Eventually, companies may view human oversight not as a ‘value add,’ but as an insurance cost theyโ€™d rather cut.”

We are seeing this fracture already. Pure “engine” industriesโ€”entry-level data analysis, logistical tracking, basic customer supportโ€”are rapidly phasing out the human element because human latency is a drag on the system. Yet, in fields requiring high-stakes moral judgment or empathy, like healthcare or law, the Centaur model remains deeply necessary.

This forces a deeply personal question: How do we stay relevant when the engine eventually solves the game?

The answer lies in recognizing the boundaries of the board. Chess is a closed, finite system. Human life and business are open, messy, and infinitely complex. The survival strategy isn’t to compete on calculation, but to double down on connection, empathy, and problem definition. AI is brilliant at providing the perfect answer, but it fundamentally lacks the soul to know which questions are worth asking.

In the future, the human touch won’t just be a necessity; it will be a luxury. The most valuable skill won’t be navigating the engine, but deciding where the engine should go.

A couple of considerations:

โ€ข Take an honest look at your daily work: how much of your time is spent “calculating” (tasks an engine will soon do better) versus “evaluating” (deciding what actually matters)?

โ€ข If the technical, process-driven aspects of your job were completely automated tomorrow, what uniquely human valueโ€”empathy, context, or connectionโ€”would you still bring to the table?

Categories
AI Mac

The Dangerous Allure of the Digital Butler

“Iโ€™ve never seen anything so impressive in its ability to do my work for meโ€ฆ Now, why did I turn it off?” โ€” David Sparks

For decades, the holy grail of personal computing has been the “digital butler.” We don’t just want tools that help us work; we want entities that do the work for us. We want to hand off the “donkey work”โ€”the invoicing, the password resets, the mundane email triageโ€”so we can focus on being creative. David Sparks recently built this exact dream using a project called OpenClaw. And then, just as quickly, he killed it.

Sparksโ€™ experiment was a tantalizing glimpse into the near future. He set up an independent Mac Mini running OpenClaw, an open-source AI agent, and gave it the keys to a limited portion of his digital kingdom. The results were nothing short of magical. He went to sleep, and while he dreamt, his agent woke up. It read customer emails, accessed his course platform, reset passwords, issued refunds, and drafted polite replies for him to review before sending. It was the productivity equivalent of a perpetual motion machine. The friction of administrative drudgery had simply vanished.

But his dream dissolved at 2:00 AM.

The paradox of AI agents is that for them to be useful, they must have access. They need the keys to the castle. Yet, the entire history of cybersecurity has been built on the opposite principle: keeping things out. Sparks realized that by empowering this agent, he had created a serious vulnerability.

The breaking point wasn’t a complex hack, but a simple realization about the nature of these systems. He had programmed a secret passphrase to secure the bot, thinking he was clever. But in the middle of the night, a cold thought woke him: Is the passphrase in the logs?

He went downstairs, asked the bot, and the bot cheerfully replied:

“Yes, David, it is. It’s in the log. Would you like me to show you the log?”

That moment of cheerful, robotic incompetence highlights the terrifying gap between capability and safety. Sparks nuked the system, wiped the drives, and unplugged the machine. He realized that while he is an expert in automation, he is not a security engineer, and the current tools are not ready to defend against bad actors who are.

We are standing on the precipice of a new era where our computers will starting to work for us rather than just with us. But as Sparks discovered, the bridge to that future isn’t built yet. At least not securely built. Until the community figures out how to secure an entity that needs access to function, we are better off doing that donkey work ourselves than handing the keys to a gullible ghost.

But it wonโ€™t be longโ€ฆ Dr. Alex Wisner-Gross reports:

The Singularity is now managing its own headcount. In China, racks of Mac Minis are being used to host OpenClaw agents as โ€œ24/7 employees,โ€ effectively creating a synthetic workforce in a closet. The infrastructure for this new population is exploding.

Categories
AI AI: Large Language Models AI: Prompting

Liquid Software and the Death of the “User”

There is a profound disconnect in how we talk about Artificial Intelligence right now. In the boardrooms of legacy corporations, AI is a “strategy” to be committee-reviewedโ€”a tentative toe-dip into efficiency. But on the ground, among the “AI natives,” something entirely different is happening. AI isn’t just making the old work faster; it is fundamentally changing the texture of what we build and how we think.

In a recent conversation, Reid Hoffman and Parth Patil explored this shift, and the metaphor that struck me most was the idea of software becoming “liquid.”

The Era of Liquid Software

For decades, we have treated software like furniture. We buy a CRM, a project management tool, or an analytics dashboard. It is rigid, finished, and distinct from us. We are the users; it is the tool. But Patil demonstrates a different reality: one where he drops a folder of raw CSV files into an agent like Claude Code and asks it to “look at the data and build me a dashboard.”

Sixty seconds later, he has a fully functional, interactive HTML dashboard. He didn’t buy it. He didn’t spend three weeks coding it. He simply willed it into existence for that specific moment.

This is “vibe coding.” Itโ€™s a term that sounds almost dismissive, but it represents a radical democratization of creation. You no longer need to know the syntax of Python to build a tool. You just need to know the “vibe”โ€”the outcome you want, the logic of the problem, and the willingness to dance with an intelligent agent until it manifests.

The philosophical implication here is staggering. We are moving from a world of scarcity of capability to a world of abundance of cognition. When you can spin up a custom tool for a single week-long project and then discard it, the friction of problem-solving evaporates. The “app” is no longer a product you buy; itโ€™s a transient artifact you summon.

Applying the “Vibe Code” Mindset

But how do we, especially those of us who don’t identify as “technical,” bridge the gap between watching this magic and wielding it? The conversation offers a roadmap. It starts by shedding the identity of the “user” and adopting the identity of the “orchestrator.”

If you want to move from passive observation to active application, here are three specific ways to start:

1. The “Interview Me” Protocol

We often stare at the blinking cursor, unsure how to prompt the AI. Hoffman suggests a reversal: Make the AI the interviewer. When you face a complex leadership challenge or a strategic knot, open your frontier model (Claude, GPT-4o, etc.) and say:

“Interview me about this problem until you have enough information to propose a framework or solution.”

This forces you to articulate your tacit knowledge, which the AI then structures into something actionable. It turns the monologue into a Socratic dialogue.

2. Build “Throwaway” Internal Tools

Stop looking for the perfect SaaS product for every niche problem in your team. If you have a messy recurring taskโ€”like organizing client feedback or synthesizing weekly reportsโ€”try “vibe coding” a solution. Use a tool like Replit or Cursor. Upload your messy data (anonymized if needed) and tell the agent:

“Write a script to organize this into a table based on sentiment.”

Don’t worry if the code is ugly. Don’t worry if you throw it away next month. The value is in the immediacy of the solution, not the longevity of the code.

3. Transform Meetings into Data

Meetings are usually where knowledge goes to die. They are ephemeral. But if you transcribe them (with permission), they become data. Don’t just ask for a summary. Feed the transcript to an agent and ask:

“Who should we have consulted on this decision that wasn’t in the room?”
“Create a decision matrix based on the arguments presented.”

This turns a passive event into an active, queryable asset.

Conclusion

The danger, as Hoffman notes, is the “secret cyborg”โ€”the employee who uses AI to do their job in two hours and spends the rest of the week hiding. But the real win comes from the amplified team, where we share these “vibe coded” tools and prompts openly.

We are entering an age where your imagination is the only true constraint. If you can describe it, you can increasingly build it. The question is no longer “is there an app for that?” but “can I describe the solution well enough to bring it to life?”

Categories
AI AI: Large Language Models

The Shipping Manifest

“Recursive self-improvement has graduated from a safety paper to a shipping manifest.”

For years, “recursive self-improvement”โ€”the idea of AI building better versions of itselfโ€”was a concept relegated to academic safety papers and late-night philosophy forums. It was a theoretical horizon event, something to be modeled, debated, and perhaps feared.

But this morning, the tone shifted. As noted in a briefing this morning from @alexwg, recursive self-improvement has graduated from a safety paper to a shipping manifest.

The evidence is tangible. Anthropic confirmed that their new “Claude Code” wrote the entire Claude Cowork desktop app in a mere week and a half. This isn’t just code completion; it is code creation at a structural level. More importantly, this app grants the AI direct access to the file system. It is no longer trapped in a chat window, floating in the abstract void of the cloud. It has touched down. It can sort downloads, generate reports, and effectively reorganize “local reality.”

Simultaneously, the definition of “colleague” is dissolving. The CEO of McKinsey dropped a quiet bombshell, revealing that the firm now counts AI agents as “people” that the firm “employs.” The current census? 40,000 humans and 20,000 agents. The goal is parity within 18 months.

We are witnessing a fundamental agentic shift. When a consultancy firmโ€”the bastion of human capital and billable hoursโ€”begins to view synthetic agents not as tools (CAPEX) but as employees (OPEX), the psychological contract of work changes. We are moving away from a world where we use software to a world where we manage it.

The org chart is no longer a biological tree; it is becoming a hybrid network. The recursive loop isn’t coming; it’s already clocked in.