
The Architecture of Resonance

There’s a particular kind of madness that strikes writers late at night, or in the stagnant hours of mid-afternoon, when you find yourself staring at a single sentence for twenty minutes.

You’re weighing a semicolon against an em dash. You’re wondering if “murmur” is too soft or if “whisper” is too cliché. All of this while knowing, with complete certainty, that no reader will ever stop to appreciate this specific choice. They’ll just read the sentence and move on.

So why do we do it?

In Draft No. 4, John McPhee — the legendary literary journalist who spent decades at The New Yorker — shares a principle he still writes on the blackboard at Princeton. It’s actually a quote from Cary Grant: “A Thousand Details Add Up to One Impression.” The implication, McPhee explains, is that almost no individual detail is essential, while the details as a whole are absolutely essential.

I find this idea endlessly useful. And a little reassuring.

Think about walking into a beautifully designed home. You don’t notice the precise angle of the crown molding or the specific undertones of the paint. You don’t walk in and say, “Ah yes, Alabaster White.” You just feel warmth, or elegance, or comfort. The impression is singular — but it’s entirely built from a thousand invisible decisions someone made before you arrived.

Writing works the same way. The rhythm of your sentences, the specificity of your verbs, the way a paragraph ends — these are the details. Individually, they’re expendable. Swap “murmur” for “whisper” and the piece survives. Delete the semicolon and the world keeps turning.

But collectively, they are the piece.

Start compromising — reach for the easy cliché, let a clunky transition slide, settle for vague where you could be specific — and the foundation slowly rots. The reader won’t be able to name the moment they lost interest. They’ll just close the tab. The impression shifts from resonant to flat, without anyone quite knowing why.

Writing, then, is an act of quiet faith. It asks you to labor over things no one will applaud. Nobody claps for an em dash. But the work isn’t really for applause — it’s done out of respect for the whole.

We curate a thousand invisible things so the reader can feel one visible truth.

So the next time you’re agonizing over a single word at midnight, remember: you’re not just picking a word. You’re placing a tile in a mosaic. Cary Grant understood it. McPhee put it on a blackboard. You might as well make it count.


Did You Really Program That?

The Fundamental Issue

I once found myself in a local restaurant filled with young professors and graduate students from a nearby university. They were clustered around a long table arguing about the nature of originality in a world where machines could now produce human-like text and code with a few keystrokes. I sat at a small table nearby, eavesdropping.

“I just don’t think it’s right,” said a woman with steel-rimmed glasses. “If you’re using AI to write your paper, you should be honest about it. It’s intellectually dishonest otherwise.”

Her companion, a man with unruly hair and a cardigan stretched at the elbows, shook his head vigorously. “But what about the code you’re writing? Aren’t you using GitHub Copilot? Isn’t that the same thing?”

The question hung in the air between them.

The Contested Border

The border between human creativity and machine assistance has always been contested territory. When the word processor replaced the typewriter, did writers suddenly become less authentic? When compilers made it unnecessary to understand assembly language, did programmers become less skilled? Each technological advancement seems to bring with it a fresh anxiety about the dilution of human agency, a sense that we are somehow cheating if we don’t do things the “hard way”.

I recently visited a friend who works at a technology startup in San Francisco. His office was a converted warehouse with exposed brick and polished concrete floors. The ceiling was high enough that you could fly a small drone inside without hitting anything. Software engineers clustered around monitors, wearing noise-canceling headphones and drinking coffee from biodegradable cups. My friend showed me a tool called Cursor, which allows programmers to describe what they want a program to do in plain English, and then generates the code automatically.

“It’s called ‘vibe coding,’” he explained, showing me the interface. “You sort of… gesture at what you want, and the AI figures out how to make it happen.”

I watched as he typed a simple instruction: “Create a function that calculates the Fibonacci sequence up to the nth term.” The AI responded with a dozen lines of code, neatly formatted and commented. My friend nodded approvingly and made a few small adjustments.
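I didn’t keep a copy of what the AI actually produced, but the generated function would plausibly have looked something like this sketch (the function name and exact shape are my reconstruction, not a transcript):

```python
def fibonacci(n):
    """Return the first n terms of the Fibonacci sequence."""
    sequence = []
    a, b = 0, 1
    for _ in range(n):
        sequence.append(a)  # record the current term
        a, b = b, a + b     # advance to the next pair
    return sequence

print(fibonacci(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```

A dozen lines, neatly commented, correct on the first try. The small adjustments my friend made were the kind any reviewer might: naming, edge cases, style.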

“Did you really program that?” I asked.

He laughed. “Define ‘program.’ I told it what I wanted. It wrote the code. I checked it and made a few tweaks. Is that programming? I don’t know. But I’m still responsible for the end result.”

Tools like Cursor and Windsurf have lately become all the rage among software engineers, delivering dramatic productivity boosts to those who write code.

The Woodworker’s Tools

The discussion reminded me of a conversation years ago with a group of master woodworkers. They were craftsmen who built furniture by hand, using tools that hadn’t changed much in centuries. I asked one of them, a man with fingers gnarled by decades of work, what he thought about power tools.

“People think using hand tools makes you more authentic,” he said, running his palm along the grain of a maple board. “But the old masters would have used power tools if they’d had them. The point isn’t the tool. It’s what you’re trying to create, and whether you understand what you’re doing.”

He showed me a dovetail joint he’d cut with a table saw and jig. “Is this less authentic because I didn’t use a hand saw? The joint is still tight. The wood is still joined. I still had to understand the properties of the wood and how the joint works.”

Writers and programmers alike are wrestling with similar questions. When does technological assistance become a crutch? When does it become cheating? The novelist who uses a thesaurus is not accused of intellectual dishonesty. The programmer who uses a library of pre-written functions is not condemned for laziness. But something about AI assistance feels different to many people.

The Future of Creation?

Perhaps it’s the speed. A process that once took hours now takes seconds. Perhaps it’s the black-box nature of the technology. We cannot see how the AI arrived at its solution, cannot trace the path of its reasoning. Many of us suspect these systems are just dumb machines probabilistically predicting the next word. Or perhaps it’s simply that we are witnessing a fundamental shift in what it means to create.

My programmer friend has a different perspective. “The future of programming isn’t writing code,” he says. “It’s understanding problems and directing machines to solve them. The code is just an implementation detail.”

I wonder if writers will come to feel the same way. Will the future of writing be less about crafting individual sentences and more about directing AI to capture a particular voice or style? Will we come to see the arrangement of words as merely an implementation detail in the larger project of communication? And how does this extend to other fields, like film and the visual arts?

The Disclosure Dilemma

The question of disclosure remains thorny. Should writers and programmers be required to disclose their use of AI assistance? Some argue that it’s essential for transparency and accountability. Others suggest that it’s no different from any other tool, and that the focus should be on the final product, not the process used to create it.

I think of the woodworker showing me his dovetail joint. “The wood doesn’t care how you cut it,” he said. “It only cares that the joint is tight.”

Perhaps the same is true of writing and programming. Many readers won’t care how the words were arranged, only that they resonate. The software user doesn’t care how the code was written, only that it works.

And yet, there is something deep within us that values the human touch, that finds meaning in the knowledge that another person’s mind and hands shaped the thing we’re experiencing. We want to know that somewhere in the process, a human being made choices, experienced frustration and triumph, poured their unique perspective into the creation.

As I left the restaurant I mentioned earlier, the debate at the long table was still going strong. I caught a final snippet as I passed by: “It’s not about the tools,” someone was saying. “It’s about the intention.”

Perhaps that’s the heart of it. Not what tools we use, but how we use them, and why. Not whether we use AI, but whether we use it thoughtfully, with intention and understanding. Not whether we disclose its use, but whether we’re honest about our process, both with ourselves and with others.

There’s no question that these AI tools are here, and that they’re improving dramatically, seemingly every day. They provide powerful leverage to amplify our own skills, if we choose to use them wisely.

Note: the initial idea for this post was mine, triggered by listening to a podcast interview with Dan Shipper of Every. I had help fleshing it out using Claude 3.7 from Anthropic. The post began with a couple of paragraphs I wrote. Then I used the following prompt: “You’re an expert writer and editor helping me with my personal blog. Write a 1000 word blog post in the style of John McPhee based on the following initial thoughts…” After that, I rewrote portions of Claude’s response to add clarity and emphasis before sharing it here.

Note 2: all of this was done on my iPhone.