Categories: Aging, Citizens Band Radio, History, Living

The Static We Left Behind

There was a time when the airwaves crackled with a distinct, unpolished kind of magic. It wasn’t the curated broadcast of a corporate radio station, but the raw, spontaneous voices of strangers sharing the same lonely stretch of highway or suburban night. When I previously wrote about the rise and decline of CB radio, I didn’t fully anticipate how deeply the piece would resonate. The influx of emails, comments, and shared memories pointed to a singular, striking truth: we don’t just miss the hardware of the 1970s; we miss the serendipity of the connection it offered.

In the decades since the fiberglass whip antenna faded from the American automotive silhouette, our society has become infinitely more “connected.” We carry glass slabs in our pockets capable of reaching anyone, anywhere, in an instant. Yet, paradoxically, we often find ourselves feeling more profoundly isolated. The modern digital landscape is largely an algorithmic echo chamber, meticulously designed to feed us reflections of what we already know and who we already are.

CB radio, by contrast, was a geographic lottery. You turned the dial, adjusted the squelch, and were instantly thrust into a transient community composed entirely of whoever happened to be within your physical radius. It was messy, chaotic, occasionally absurd, and deeply human. It was a localized town square operating on a 27 MHz frequency.

Reflecting on the quiet that eventually fell over Channel 19, it becomes clear that the decline of CB radio was more than just a technological shift—it was a cultural one. We traded the spontaneous for the scheduled. We swapped the local for the global, and the intimately anonymous for the hyper-public. We traded the crackle of static for the endless, frictionless scroll of the feed.

But the fundamental human impulse that fueled the CB craze never actually disappeared. The desire to reach out into the dark void and hear a human voice echo back—the spirit of “Breaker 1-9, is anyone out there?”—remains hardwired into our psychology. We see fragmented echoes of it today in late-night Reddit threads, in niche Discord servers, and in the fleeting, unscripted interactions of multiplayer gaming. We are all still, in our own ways, searching for a shared frequency.

Perhaps the true legacy of the CB radio isn’t a cautionary tale of obsolescence, but a gentle reminder. It reminds us that in our highly polished, curated digital world, there is still immense, undeniable value in the unscripted encounter. We haven’t lost the need to connect; we are simply navigating a world with too much noise and too few open channels.

Categories: AI, Work

The Centaur’s Dilemma: What Chess Teaches Us About the AI Era

Note: this post was prompted by a recent conversation between Dario Amodei and Ross Douthat.

In 1998, a year after his historic defeat by IBM’s Deep Blue, Garry Kasparov did something unexpected: he teamed up with the machine. He pioneered “Centaur Chess,” a hybrid format where human intuition merges with cold, silicon calculation. The human acts as the executive, the engine as the raw horsepower. For a time, it produced the highest level of chess ever played.

But there is a sobering lesson hidden in the evolution of this game. We are currently living through the workforce equivalent of the Centaur era, and history suggests our “hybrid honeymoon” won’t last forever.

Right now, we are in the augmentation phase. A junior copywriter or coder armed with a Large Language Model can suddenly produce work at a staggering pace. The AI acts as a great equalizer, much like a mediocre chess player with a strong engine beating a Grandmaster in the early 2000s. We are shifting into executive roles—prompting, curating, and orchestrating rather than creating from scratch.

However, in modern Centaur Chess, a chilling reality has emerged: human intervention now yields negative returns. The engines have become so impossibly advanced that when a human overrides Stockfish today, they are almost certainly making a mistake. The human in the loop, once the ultimate strategic advantage, has become a liability.

This is the “Grandmaster Floor” problem, and it is coming for the job market.

“Eventually, companies may view human oversight not as a ‘value add,’ but as an insurance cost they’d rather cut.”

We are seeing this fracture already. Pure “engine” industries—entry-level data analysis, logistical tracking, basic customer support—are rapidly phasing out the human element because human latency is a drag on the system. Yet, in fields requiring high-stakes moral judgment or empathy, like healthcare or law, the Centaur model remains deeply necessary.

This forces a deeply personal question: How do we stay relevant when the engine eventually solves the game?

The answer lies in recognizing the boundaries of the board. Chess is a closed, finite system. Human life and business are open, messy, and infinitely complex. The survival strategy isn’t to compete on calculation, but to double down on connection, empathy, and problem definition. AI is brilliant at providing the perfect answer, but it fundamentally lacks the soul to know which questions are worth asking.

In the future, the human touch won’t just be a necessity; it will be a luxury. The most valuable skill won’t be navigating the engine, but deciding where the engine should go.

A couple of considerations:

• Take an honest look at your daily work: how much of your time is spent “calculating” (tasks an engine will soon do better) versus “evaluating” (deciding what actually matters)?

• If the technical, process-driven aspects of your job were completely automated tomorrow, what uniquely human value—empathy, context, or connection—would you still bring to the table?

Categories: AI, AI: Large Language Models, Medical

Stethoscopes and Statutes in the Age of AI

David Sparks (aka MacSparky) dropped a casual bombshell on a recent podcast, the kind of offhand remark that lodges in your mind like a burr on a sock.

Paraphrasing, he said something like: “AI seems to be a boon for doctors and a threat to lawyers.” He was describing a sentiment he’d observed among the members of his MacSparky Labs community.

It’s the sort of statement that invites you to pause, tilt your head, and wonder what lies beneath.

Sparks, a lawyer himself who gave up his legal career a few years ago, knows one of those worlds intimately. His words carry the weight of someone who’s walked the halls of courthouses and squinted at screens late into the night.

So what’s he pointing out that the rest of us might miss?

Start with doctors. Medicine is a profession of patterns and particulars, a dance between the general and the specific. A patient walks in—say, a 52-year-old man with a cough that’s lingered too long. The doctor’s mind whirs: pneumonia? Bronchitis? Something rarer, like sarcoidosis? The human brain is a marvel at this, but it’s not infallible. Enter AI, with its tireless capacity to sift through terabytes of data—X-rays, lab results, decades of case studies—and spot the needle in the haystack. Modern diagnostic AI systems can crunch genetic sequences or flag anomalies in real time, handing doctors a sharper lens. It’s not replacing the physician; it’s amplifying her reach. For doctors, AI is an upgraded stethoscope.

Lawyers, though, face a different challenge. Their craft is less about data and more about argument, a tapestry of precedent and persuasion woven over centuries. Sparks knows this: he’s stood before judges, parsing statutes, coaxing juries with a turn of phrase. But here’s the rub—much of lawyering is rote. Drafting contracts, reviewing discovery, chasing down case law—these are tasks of repetition, not revelation. AI can do them faster, cheaper, and with fewer coffee stains. Harvey, an AI platform built specifically for legal work, joins earlier efforts like ROSS, built on IBM’s Watson, in scanning legal databases in seconds, spitting out answers that once took associates hours to unearth. For the grunt work, AI is a scythe through wheat. The threat isn’t extinction but erosion—junior lawyers, the ones who cut their teeth on those late-night searches, might find the ladder’s lower rungs sawed off.

Yet law isn’t just mechanics; it’s theater. A machine can draft a motion, but can it read a juror’s furrowed brow? Can it pivot mid-trial when a witness veers off script?

Doctors heal with facts; lawyers win with stories. AI might streamline the former, but the latter resists its grasp—for now. Sparks sees a fault line: medicine gains an important new partner, while law faces a new rival.