Categories
AI, Creativity, Programming, Writing

We Are All Painters Now: The Era of Vibe Coding

For decades, the act of creating software was exactly that: writing. It was a distinctly left-brained, agonizingly precise discipline.

Programmers were typists of logic, translating human intent into a rigid, unforgiving syntax that a machine could understand. A single misplaced semicolon, an unclosed bracket, or a misspelled variable could bring an entire system crashing down.

Building software meant placing one brick after another, working meticulously from the ground up.

In this traditional paradigm, coders were the ultimate embodiment of Annie Dillard’s writer. As she noted in The Writing Life, “Writers… work from left to right. The discardable chapters are on the left.”

When you wrote code, your mistakes, your refactoring, and your discarded logic were all part of a linear, grueling journey. If a feature didn’t work, you had to physically wade back into the text, debugging, reading line by line, and rewriting the narrative of the application. The discarded chapters were the endless hours spent wrestling with a single broken dependency.

But recently, a profound paradigm shift has quietly taken over our screens. We are transitioning out of the era of writing software and into the era of “vibe coding.”

Vibe coding fundamentally changes our relationship with the machine. With the rise of advanced AI coding assistants, we are no longer placing the bricks ourselves; we have become the architects and the creative directors. You don’t write the loop or manually construct the database query. Instead, you describe the feeling, the function, and the outcome. You tell the AI, “Make this dashboard feel more modern,” or “The logic here is too clunky, make it flow faster and handle edge cases gracefully.” You are coding by intuition. You are steering by the “vibe” of the output rather than the mechanics of the input.

Suddenly, Dillard’s other metaphor takes center stage. In the age of vibe coding, we have become painters.

“A painting covers its tracks. Painters work from the ground up. The latest version of a painting overlays earlier versions, and obliterates them.”

When we vibe code, we ask an AI for a functional prototype, and it gives us a canvas. We look at it, test it, and sense whether it aligns with our vision. If it doesn’t quite hit the mark, we don’t necessarily rewrite the code from scratch. We simply prompt the AI to try again, adding a new layer of instruction. The AI paints a new layer of code directly over the old one. The awkward, underlying iterations—the messy attempts at styling, the inefficient logic of the first draft—are obliterated by the newest prompt.

The machine covers our tracks for us. We don’t need to know exactly how the underlying pixels were rearranged or how the syntax was refactored. The final application emerges as a stunning obliteration of its own clumsy past.

As someone who has spent time wrestling with the rigid demands of syntax, there is a strange, quiet grief in letting go of that left-to-right process. There is a deeply earned, tactile satisfaction in building something manually, understanding the precise weight and placement of every line of code. Relinquishing that control can feel like a loss of craftsmanship.

Yet, there is also a breathtaking liberation in this new medium. We are moving from a world of manual construction to a world of artistic curation. The barrier to entry is no longer fluency in a specific, arcane language; it is simply the clarity of your imagination and your ability to articulate your intent.

The next time you sit down to build something digital, notice the shift in your own posture. You no longer have to carry the heavy burden of the writer, agonizing over every word and leaving your discardable chapters on the left. You can step back, look at the whole canvas, and trust your intuition. Let the AI cover the tracks. Embrace the obliteration of the early drafts.

We are all painters now, coaxing the future into existence one brushstroke at a time.

Categories
AI, AI: Large Language Models, AI: Prompting

Liquid Software and the Death of the “User”

There is a profound disconnect in how we talk about Artificial Intelligence right now. In the boardrooms of legacy corporations, AI is a “strategy” to be committee-reviewed—a tentative toe-dip into efficiency. But on the ground, among the “AI natives,” something entirely different is happening. AI isn’t just making the old work faster; it is fundamentally changing the texture of what we build and how we think.

In a recent conversation, Reid Hoffman and Parth Patil explored this shift, and the metaphor that struck me most was the idea of software becoming “liquid.”

The Era of Liquid Software

For decades, we have treated software like furniture. We buy a CRM, a project management tool, or an analytics dashboard. It is rigid, finished, and distinct from us. We are the users; it is the tool. But Patil demonstrates a different reality: one where he drops a folder of raw CSV files into an agent like Claude Code and asks it to “look at the data and build me a dashboard.”

Sixty seconds later, he has a fully functional, interactive HTML dashboard. He didn’t buy it. He didn’t spend three weeks coding it. He simply willed it into existence for that specific moment.
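Hoffman and Patil describe the output, not the code behind it, but the shape of such a throwaway artifact is easy to imagine. Here is a minimal, hypothetical sketch of what "turn a CSV into a standalone HTML dashboard" can reduce to; the file names, function name, and bare-table layout are my assumptions, not Patil's actual result:

```python
import csv
import html
from pathlib import Path

def csv_to_dashboard(csv_path: str, out_path: str = "dashboard.html") -> str:
    """Render a CSV file as a single, self-contained HTML page with a table."""
    with open(csv_path, newline="") as f:
        rows = list(csv.reader(f))
    if not rows:
        raise ValueError("empty CSV")

    def cells(tag, row):
        # Escape each value so raw data can't break the markup.
        return "".join(f"<{tag}>{html.escape(c)}</{tag}>" for c in row)

    header, body = rows[0], rows[1:]
    table = (
        "<table>"
        + "<tr>" + cells("th", header) + "</tr>"
        + "".join("<tr>" + cells("td", r) + "</tr>" for r in body)
        + "</table>"
    )
    Path(out_path).write_text(
        "<!doctype html><html><body><h1>Dashboard</h1>" + table + "</body></html>"
    )
    return out_path
```

A real agent-generated version would add charts and interactivity, but the point stands: this is disposable plumbing, summoned for one dataset and one moment.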

This is “vibe coding.” It’s a term that sounds almost dismissive, but it represents a radical democratization of creation. You no longer need to know the syntax of Python to build a tool. You just need to know the “vibe”—the outcome you want, the logic of the problem, and the willingness to dance with an intelligent agent until it manifests.

The philosophical implication here is staggering. We are moving from a world of scarcity of capability to a world of abundance of cognition. When you can spin up a custom tool for a single week-long project and then discard it, the friction of problem-solving evaporates. The “app” is no longer a product you buy; it’s a transient artifact you summon.

Applying the “Vibe Code” Mindset

But how do we, especially those of us who don’t identify as “technical,” bridge the gap between watching this magic and wielding it? The conversation offers a roadmap. It starts by shedding the identity of the “user” and adopting the identity of the “orchestrator.”

If you want to move from passive observation to active application, here are three specific ways to start:

1. The “Interview Me” Protocol

We often stare at the blinking cursor, unsure how to prompt the AI. Hoffman suggests a reversal: Make the AI the interviewer. When you face a complex leadership challenge or a strategic knot, open your frontier model (Claude, GPT-4o, etc.) and say:

“Interview me about this problem until you have enough information to propose a framework or solution.”

This forces you to articulate your tacit knowledge, which the AI then structures into something actionable. It turns the monologue into a Socratic dialogue.

2. Build “Throwaway” Internal Tools

Stop looking for the perfect SaaS product for every niche problem in your team. If you have a messy recurring task—like organizing client feedback or synthesizing weekly reports—try “vibe coding” a solution. Use a tool like Replit or Cursor. Upload your messy data (anonymized if needed) and tell the agent:

“Write a script to organize this into a table based on sentiment.”

Don’t worry if the code is ugly. Don’t worry if you throw it away next month. The value is in the immediacy of the solution, not the longevity of the code.
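As a purely hypothetical illustration of how ugly-but-useful such a throwaway can be: the keyword lists, function names, and crude scoring below are all my assumptions, not anything from the conversation, and a real version might call a sentiment API instead. The point is that even this naive sketch solves the immediate problem:

```python
# Naive keyword-based sentiment sorter: the kind of disposable script
# a "vibe coding" session might produce for triaging client feedback.
POSITIVE = {"love", "great", "excellent", "fast", "helpful"}
NEGATIVE = {"hate", "slow", "broken", "confusing", "bug"}

def label(text: str) -> str:
    """Tag a comment as positive, negative, or neutral by keyword overlap."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def to_table(comments: list[str]) -> str:
    """Render the labeled comments as a plain-text table, grouped by sentiment."""
    rows = sorted((label(c), c) for c in comments)
    width = max(len(s) for s, _ in rows)
    return "\n".join(f"{s.ljust(width)} | {c}" for s, c in rows)
```

Throw it away next month without guilt; the value was in the afternoon it saved, not the code.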

3. Transform Meetings into Data

Meetings are usually where knowledge goes to die. They are ephemeral. But if you transcribe them (with permission), they become data. Don’t just ask for a summary. Feed the transcript to an agent and ask:

“Who should we have consulted on this decision that wasn’t in the room?”
“Create a decision matrix based on the arguments presented.”

This turns a passive event into an active, queryable asset.

Conclusion

The danger, as Hoffman notes, is the “secret cyborg”—the employee who uses AI to do their job in two hours and spends the rest of the week hiding. But the real win comes from the amplified team, where we share these “vibe coded” tools and prompts openly.

We are entering an age where your imagination is the only true constraint. If you can describe it, you can increasingly build it. The question is no longer “is there an app for that?” but “can I describe the solution well enough to bring it to life?”

Categories
AI, Creativity, Writing

Did You Really Program That?

The Fundamental Issue

I once found myself in a local restaurant filled with young professors and graduate students from a nearby university. They were clustered around a long table arguing about the nature of originality in a world where machines could now produce human-like text and code with a few keystrokes. I sat at a small table nearby, eavesdropping.

“I just don’t think it’s right,” said a woman with steel-rimmed glasses. “If you’re using AI to write your paper, you should be honest about it. It’s intellectually dishonest otherwise.”

Her companion, a man with unruly hair and a cardigan stretched at the elbows, shook his head vigorously. “But what about the code you’re writing? Aren’t you using GitHub Copilot? Isn’t that the same thing?”

The question hung in the air between them.

The Contested Border

The border between human creativity and machine assistance has always been contested territory. When the word processor replaced the typewriter, did writers suddenly become less authentic? When compilers made it unnecessary to understand assembly language, did programmers become less skilled? Each technological advancement seems to bring with it a fresh anxiety about the dilution of human agency, a sense that we are somehow cheating if we don’t do things the “hard way.”

I recently visited a friend who works at a technology startup in San Francisco. His office was a converted warehouse with exposed brick and polished concrete floors. The ceiling was high enough that you could fly a small drone inside without hitting anything. Software engineers clustered around monitors, wearing noise-canceling headphones and drinking coffee from biodegradable cups. My friend showed me a tool called Cursor, which allows programmers to describe what they want a program to do in plain English, and then generates the code automatically.

“It’s called ‘vibe coding,’” he explained, showing me the interface. “You sort of… gesture at what you want, and the AI figures out how to make it happen.”

I watched as he typed a simple instruction: “Create a function that calculates the Fibonacci sequence up to the nth term.” The AI responded with a dozen lines of code, neatly formatted and commented. My friend nodded approvingly and made a few small adjustments.
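He didn’t save the snippet, and I didn’t copy it, but a typical response to that prompt looks something like the following sketch — representative of the genre, not the exact code Cursor produced:

```python
def fibonacci(n: int) -> list[int]:
    """Return the first n terms of the Fibonacci sequence."""
    if n <= 0:
        return []
    sequence = [0, 1]
    while len(sequence) < n:
        # Each term is the sum of the two preceding terms.
        sequence.append(sequence[-1] + sequence[-2])
    return sequence[:n]
```

A dozen lines, neatly formatted and commented, produced in the time it took him to reach for his coffee.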

“Did you really program that?” I asked.

He laughed. “Define ‘program.’ I told it what I wanted. It wrote the code. I checked it and made a few tweaks. Is that programming? I don’t know. But I’m still responsible for the end result.”

Tools like Cursor and Windsurf have lately become all the rage among software engineers because of the dramatic productivity boosts they deliver to those writing code.

The Woodworker’s Tools

The discussion reminded me of a conversation years ago with a group of master woodworkers. They were craftsmen who built furniture by hand, using tools that hadn’t changed much in centuries. I asked one of them, a man with fingers gnarled by decades of work, what he thought about power tools.

“People think using hand tools makes you more authentic,” he said, running his palm along the grain of a maple board. “But the old masters would have used power tools if they’d had them. The point isn’t the tool. It’s what you’re trying to create, and whether you understand what you’re doing.”

He showed me a dovetail joint he’d cut with a table saw and jig. “Is this less authentic because I didn’t use a hand saw? The joint is still tight. The wood is still joined. I still had to understand the properties of the wood and how the joint works.”

Writers and programmers alike are wrestling with similar questions. When does technological assistance become a crutch? When does it become cheating? The novelist who uses a thesaurus is not accused of intellectual dishonesty. The programmer who uses a library of pre-written functions is not condemned for laziness. But something about AI assistance feels different to many people.

The Future of Creation?

Perhaps it’s the speed. A process that once took hours now takes seconds. Perhaps it’s the black-box nature of the technology: we cannot see how the AI arrived at its solution, cannot trace the path of its reasoning, and so we dismiss these systems as dumb machines probabilistically predicting the next word. Or perhaps it’s simply that we are witnessing a fundamental shift in what it means to create.

My programmer friend has a different perspective. “The future of programming isn’t writing code,” he says. “It’s understanding problems and directing machines to solve them. The code is just an implementation detail.”

I wonder if writers will come to feel the same way. Will the future of writing be less about crafting individual sentences and more about directing AI to capture a particular voice or style? Will we come to see the arrangement of words as merely an implementation detail in the larger project of communication? And how does this extend to other fields like film and the visual arts?

The Disclosure Dilemma

The question of disclosure remains thorny. Should writers and programmers be required to disclose their use of AI assistance? Some argue that it’s essential for transparency and accountability. Others suggest that it’s no different from any other tool, and that the focus should be on the final product, not the process used to create it.

I think of the woodworker showing me his dovetail joint. “The wood doesn’t care how you cut it,” he said. “It only cares that the joint is tight.”

Perhaps the same is true of writing and programming. Many readers won’t care how the words were arranged, only that they resonate. The software user doesn’t care how the code was written, only that it works.

And yet, there is something deep within us that values the human touch, that finds meaning in the knowledge that another person’s mind and hands shaped the thing we’re experiencing. We want to know that somewhere in the process, a human being made choices, experienced frustration and triumph, poured their unique perspective into the creation.

As I left the restaurant I mentioned earlier, the debate at the long table was still going strong. I caught a final snippet as I passed by: “It’s not about the tools,” someone was saying. “It’s about the intention.”

Perhaps that’s the heart of it. Not what tools we use, but how we use them, and why. Not whether we use AI, but whether we use it thoughtfully, with intention and understanding. Not whether we disclose its use, but whether we’re honest about our process, both with ourselves and with others.

There’s no question that these AI tools are here and that they’re improving dramatically, seemingly every day. They provide powerful leverage to amplify our own skills, if we choose to use them wisely.

Note: the initial idea for this post was mine, triggered by listening to a podcast interview with Dan Shipper of Every. I had help fleshing it out using Claude 3.7 from Anthropic. The post began with a couple of paragraphs I wrote. Then I used the following prompt: “You’re an expert writer and editor helping me with my personal blog. Write a 1000 word blog post in the style of John McPhee based on the following initial thoughts…” After that, I rewrote portions of Claude’s response to add clarity and emphasis before sharing it here.

Note 2: all of this was done on my iPhone.