Categories: AI, AI: Large Language Models, AI: Prompting

Using a Developmental Editor

I recently came across an X post by @heyrimsha where she shared several AI prompts that looked quite interesting.

One of them in particular was based on the notion of having a “developmental editor” from a major publication (like the Harvard Business Review etc.) reviewing content and suggesting ways to make the content “better”.

Obviously “better” is in the eye of the beholder but I wanted to give it a try. Here are my results using Gemini 3 Fast on my post from yesterday about John F. Burns.

The prompt:

I'm going to give you a draft of one of my blog posts below. You are to act as a world-class developmental editor for a major publication (like _The Atlantic_ or _Harvard Business Review_). 

Read my draft and identify:

1. **The 'So What?' Factor:** Where is the argument too generic or 'safe'?

2. **Borrowed Authority:** Where am I using quotes as a crutch instead of developing my own original ideas?

3. **The Narrative Thread:** Is my personal history a list of facts, or a cohesive story?

After the critique, provide a 'Masterwork Rewrite' that heightens the stakes, removes the fluff, and gives the piece a unique, proprietary framework (like 'The Seasonal Soul')."

Here's the post: [POST]

Here’s Gemini’s response which includes a suggested rewrite at the end:

This is a poignant tribute to a journalistic titan. You’ve captured the “vibe” of John F. Burns perfectly—the Burberry raincoat, the smell of cordite, and the steady hand at the keyboard.

However, as an editor for The Atlantic or HBR, my job is to push you past “tribute” and into “thesis.” Currently, this reads like a very well-written appreciation. To make it a major publication piece, we need to turn your reflection on Burns into a diagnostic of our current cultural moment.

1. The ‘So What?’ Factor: The Risk of Nostalgia

The Critique: Your argument is currently: “John Burns was a great reporter from a better era, and I miss that.” This is a “safe” sentiment. Most readers over 40 agree that the “pre-internet era” had more gravitas. The Fix: You need to answer why this matters now, beyond just missing a legendary writer. The “So What” isn’t that Burns is gone; it’s that the “Institutional Anchor” he provided—the ability to remain “straight” while the world tilts—is a lost technology we are now desperate to reinvent.

2. Borrowed Authority: The Quote as a Shield

The Critique: You use Burns’ quotes (the “gold standard,” “missionary complex,” “sweep of history”) to do the heavy lifting. You are describing his excellence using his own definitions. The Fix: Don’t just quote his “gold standard.” Define what that standard looks like in the wild. Instead of saying he avoided the “missionary complex,” analyze why that complex is the dominant toxin in modern discourse. Use your own voice to categorize his genius.

3. The Narrative Thread: The “Passive Reader” Problem

The Critique: Your personal history here is a bit “list-like”: I read the obits, I remember his retirement, I read this specific obit today. It positions you as a passive consumer of his work. The Fix: Link your personal evolution to his reporting. Did his dispatch from Sarajevo change how you viewed a specific conflict? Did his “considered writing” influence how you built your own “civilized center” (to use your phrase about his wife, Jane)? Turn the “list of memories” into a “narrative of influence.”


Masterwork Rewrite: The Architecture of the Straight Line

The Framework: “The Architecture of the Straight Line” This framework posits that in a world of ideological “curves” and “slants,” the hardest thing to build is a straight line of fact. It moves the focus from the man to the method.

The Draft:

We are currently living through a Great Slant. In our rush to be “right”—morally, politically, or algorithmically—we have abandoned the rarest craft in the human repertoire: the ability to keep the paper straight.

I spent my morning submerged in the obituary of John F. Burns, the longtime New York Times foreign correspondent. To many, Burns was the last of the “firemen,” a man who appeared in the world’s most fractured zip codes wearing a Burberry raincoat and a look of studied detachment. But as I traced the arc of his career—from the sniper alleys of Sarajevo to the blast-walled compounds of Baghdad—I realized I wasn’t just mourning a reporter. I was mourning a specific type of sanity.

Burns practiced what I call The Architecture of the Straight Line. In today’s media, we suffer from the “Missionary Complex”—a term Burns himself used to describe the urge to save the world rather than see it. Modern reporting is often a performance of empathy or an exercise in tribal signaling. We no longer want the “telling detail”; we want the detail that tells our side of the story.

Burns was different. He understood that the only way to survive the “chaos of war” was to build a “civilized center.” We see this literally in his partnership with his wife, Jane Scott-Long, who turned a derelict house in Baghdad into a fortress of state-of-the-art coffee and blast walls. But Burns did this intellectually, too. While the world was screaming in ideological shorthand, he was looking for the “sweep of centuries” in the reburial of a King or the notes of a lone cellist.

This is the “So What” of his passing: We have traded Considered Writing for Instant Reaction.

In the pre-internet era Burns inhabited, the time-zone differential acted as a natural filter—a mandatory pause that allowed the “blue pencil” of the editor to catch the slant before it hit the page. Today, that filter is gone. We are drowning in the “instant, unvetted roar.” We have plenty of missionaries, but very few architects.

Reading Burns taught me that keeping the paper straight isn’t a passive act of “objectivity.” It is a violent, daily struggle against one’s own biases. It is the refusal to let the heat of the moment melt the integrity of the record.

As the obituary section grows more familiar to me, I find myself looking for more than just a list of achievements. I am looking for a compass. John Burns kept his bearings not because he ignored the fire, but because he was the only one in the room who understood that you cannot describe the flames if you are busy trying to stoke them.

The “Straight Line” is a lost art. It’s time we started rebuilding it.

Categories: AI, AI: Prompting, Books, Writing

How to Write a Book (The Voice-First AI Workflow)

We’ve all seen the “How to write a book in 24 hours with AI” tutorials. Usually, they involve a single prompt and result in a 200-page book of absolute “slop”—generic, robotic, and devoid of any human soul.

I recently came across a workflow on X that flips the script. Instead of asking AI to write for you, it uses AI as an editor, librarian, and investigative journalist to extract the book that’s already in your head.

I’ve adjusted this process a bit (with Gemini’s help) and built a revised “Toolkit” that makes it even more rigorous. This is a system you can use to write a book that actually sounds like you.

The Philosophy: Knowledge Extraction > Generation

The biggest mistake people make with AI is using it to generate ideas. The best books come from extracted knowledge. You already have the expertise in your head; the hard part is the “manual labor” of organizing, structuring, and filling gaps. We often overlook material we should have included, or can’t quite work out how it should be sequenced for maximum reader impact.

This workflow uses voice dictation as the first step to capture your raw energy and then adds in AI to help handle the structural heavy lifting.

The 7-Step “Enhanced” Workflow

1. The Walking Braindump

Go for a walk and record a voice memo of everything you want to say about your topic. Don’t worry about structure, grammar, or “the right way” to say it. Just talk. Movement usually activates parts of the brain that sitting at a desk shuts down. You’re just using the outdoors to open up your mind and bring a fresh perspective.

2. The Transcription & “Essence” Extraction

Use a tool to convert your voice memo into text (something like the built-in transcription in the iOS Voice Memos app on your iPhone or, if you prefer, a third-party app like MacWhisper or Wispr Flow). I find the built-in tools work quite well these days, so give them a try before spending money on a third-party app.

Once you have the transcript, don’t ask the AI to “write a chapter.” Instead, you want to ask it for help identifying the Core Thesis, the Target Audience, and the Unique Vocabulary you used. This creates a “North Star” for your project.

3. The Voice Mirror

This step is the secret sauce. To prevent the AI from making you sound like a corporate brochure, provide it with 3-5 pages of your best previous writing. Ask it to analyze your rhythm, sentence length, and diction. Tell it: “Clean up my transcripts, but keep my fingerprint.”

4. Recursive Brainstorming

Work on the outline with the AI. Iterate. Ask it what’s missing. How does it compare to the other best-selling books in your niche? Where are you being redundant? Where is your “original” angle?

5. Filling the Gaps via “Socratic Interview”

Once you have an outline, you’ll find “thin” areas. Instead of typing, flip things around and ask the AI to play the role of an investigative journalist. Have it interview you about the missing pieces. Dictate your answers. As with the first step, this pulls deep insights out of you that you might never have thought to type. Think of it as talking to a friend and reacting to their responses.

6. The “Devil’s Advocate” Pass

Before you get too far, ask the AI to “Red Team” your outline. Ask: “What would a skeptic say is the biggest leap of faith in my argument?” This forces you to add evidence where you were previously relying on “trust me.”

7. The Manual Polish

Finally, sit down and type. As the original author of this workflow noted, typing activates a different, more “painful” part of the brain that is essential for final quality. Use the AI-organized markdown files as your foundation, but always do the final “human” pass yourself.

Your “Project DNA”

If you’re going to try this, the key is consistency. Keep a file called Project_DNA.md. Every time you start a new session with AI, paste this file in first. It tells the AI exactly who you are, who you’re writing for, and what your voice sounds like.
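As a starting point, here is a minimal sketch of what a Project_DNA.md might contain. The headings and sample entries are illustrative, not prescriptive; adapt them to whatever your "Essence Extraction" step surfaced.

```markdown
# Project DNA

## Core Thesis
One or two sentences stating the single argument the book makes.

## Target Audience
Who the book is for, what they already know, and what they struggle with.

## Voice Fingerprint
- Typical sentence length and rhythm (e.g., short declaratives with an occasional long run)
- Signature phrases and metaphors to preserve
- Words and tones to avoid (corporate jargon, hype)

## Working Outline
Current chapter list, one line each, marked [solid] or [thin].
```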

Here’s the full toolkit with prompts that I’ve just described:

The AI Book Writing Toolkit: The Prompts

To help you get started, here are the exact prompts I use for each phase of this workflow.

Phase 1: The Essence Extractor

Use this on the transcript you’ve created after your first major walking braindump.

“I am providing a transcript of a raw ‘braindump’ for a book I am writing. Do not attempt to write the book yet. Instead, perform a deep thematic analysis to extract the ‘Essence’ of this project. Identify: The Core Thesis, The Target Audience, Unique Vocabulary (metaphors/phrases I use), and the Tone Map.”

Phase 2: The Voice Mirroring Protocol

Use this before organizing transcripts to ensure you don’t sound like a robot.

“Act as my Editor and Voice Stylist. I am providing 3–5 pages of what I consider to be my best writing. Analyze this writing for sentence rhythm, diction, and structure. Whenever you help me organize my voice transcripts, you must apply these stylistic ‘fingerprints.’ Clean up the grammar, but keep my rhythm.”

Phase 3: The Socratic Interviewer

Use this when a chapter feels thin or you hit a wall.

“Act as an investigative journalist interviewing me. Identify 3 areas where my current draft is fuzzy or surface-level. Ask me ONE question at a time. After I answer, challenge me: ‘That makes sense, but what about [Counter-argument]?’ Summarize our findings afterward.”

Phase 4: The Devil’s Advocate

Use this to stress-test your draft.

“I want you to ‘Red Team’ this rough copy of my book. Imagine you are a critical reviewer. Point out the ‘leaps of faith’ I am asking the reader to make. Identify where my arguments are ‘low-hanging fruit’ and tell me specifically where I am being too similar to existing books in this niche. Help me make it better.”

Final Thought

Your goal isn’t to use AI to work less; it’s to have it help you work deeper. By offloading the “clerk work” of organization and editing to AI, you free up your brain to do the actual “author work” of thinking, connecting, and writing.

Are you working on a book? I’d love to hear how you’re using (or avoiding) AI in your process. What prompts are you using? How have you adjusted them to fit your needs?

Categories: AI, AI: Prompting, Prompt Engineering

Using AI to Improve Prompts

This morning I came across a prompt that I’ve found provides interesting results when digging into the financial results of a public company. I’ve used it several times on various companies and found insights I otherwise might have missed. Here’s the prompt:

Analyze the last 3 earnings calls for [symbol].

Don't summarize what they said - tell me what they're NOT saying.

Focus on:

1) Questions the CEO dodged or gave vague answers to,
2) Metrics they stopped reporting compared to previous quarters,
3) Language changes - where they went from confident to cautious or vice versa,
4) New talking points that appeared suddenly,
5) Guidance changes and the exact wording they used to frame it. Then connect this to their stock performance in the 2 weeks following each call.

What pattern emerges? Explain clearly.

Is the stock a sell or a buy at its current price? How strong is your conviction?

One of the techniques that advanced AI users employ is to use AI to improve on their prompts. I wanted to see what Gemini 3 Pro could do to help me improve on this particular prompt – so I asked it:

You are an expert at prompt engineering. Here's a prompt that I find very useful but would like help to improve. What would you recommend to make it even better? Prompt: [insert prompt here]

Gemini responded and educated me:

This is a sophisticated request. You are asking for negative space analysis—detecting the absence of signal, which is significantly harder for LLMs than summarizing present text.

The weakness of your current prompt is that it relies on the model’s general “feeling” about what was dodged. To make it “expert-level,” you need to force the model to perform comparative forensics between the Prepared Remarks (scripted/safe) and the Q&A (unscripted/risky), and between historical promises and current reality.

Here is the recommended upgrade to your prompt, followed by an explanation of why these changes work.

And the suggested prompt it provided when I asked it to optimize the prompt for $PYPL:

Role: Act as a Forensic Financial Analyst and Researcher. Task: Analyze the earnings call transcripts for $PYPL (PayPal) from Q1 2025, Q2 2025, and Q3 2025.

Core Directive: Do not summarize the "Bull Case." Your goal is to identify omission, deflection, and narrative drift.

Step 1: The "Dog That Didn't Bark" (Metric Analysis)

Compare the metrics reported in the Q1 2025 deck/opening remarks against Q3 2025.

List any specific metric (e.g., Net New Actives, TPV ex-eBay, OCF growth) that was prominent in earlier calls but vanished or was de-emphasized in the most recent call.

Output: A table: [Metric Name] | [Q1 Mention Count] | [Q3 Mention Count] | [Context of Disappearance].

Step 2: Semantic Divergence (Prepared vs. Q&A)

Analyze the tone difference between the CEO's Prepared Remarks (Scripted) and their Q&A Responses (Unscripted).

Identify specific instances where a confident claim in the script was walked back or qualified with "hedging" words ("essentially," "broadly," "we believe," "monitor") during the Q&A.

Step 3: The "Non-Answer" Detection

Review the Q&A section. Identify questions where the executive:

Pivoted to a different topic (e.g., asked about margins, answered about volume).

Used "word salad" (high jargon, low specific detail).

Claimed it was "too early to tell" for a product launched >6 months ago.

Quote the specific question and the specific non-answer.

Step 4: Stock Correlation

Retrieve the stock price performance for the 10 trading days following each call.

Correlate the severity of the "evasiveness" you found in Step 3 with the subsequent price action.

Final Verdict:

Based only on the negative signals (omissions and deflections), rate the stock as Accumulate, Neutral, or Distribute (Sell).

Provide a "Conviction Score" (1-10) based on the clarity of the executive team's transparency.

Gemini went on to explain the enhancements it made:

Why This is Better

  1. It defines the “Negative Space” (Step 1) LLMs hallucinate less when you ask them to count. By asking for a specific comparison of mention counts of metrics between Q1 and Q3, you force the model to mathematically prove that a topic has been dropped.
  2. It separates Script from Reality (Step 2) Executives are media-trained to sound confident in their opening speech. The “alpha” (the edge) is found when they get flustered in the Q&A. This prompt explicitly instructs the model to treat those two sections differently.
  3. It targets specific evasion tactics (Step 3) Instead of just asking for “dodged questions,” this prompt gives the model a taxonomy of evasion: Pivoting and Word Salad. This helps the model recognize that “We are excited about the long-term trajectory of Venmo” is actually a non-answer to “Why did Venmo revenue drop this quarter?”
  4. It forces “Conviction Scoring” Asking for a simple “Buy/Sell” often yields a hedged, safe answer from AI. Asking for a “Conviction Score” forces the model to weigh the evidence. If it finds many dodged questions, the conviction score for a “Sell” should naturally be higher.
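Point 1 above is also easy to verify yourself if you have the transcripts as plain text. Here's a minimal Python sketch of the "mention count" comparison; the metric names and transcript snippets below are made up for illustration, not real PayPal data:

```python
import re

# Hypothetical metric names; swap in whatever KPIs the company actually reports.
METRICS = ["Net New Actives", "TPV", "Operating Cash Flow"]

def mention_counts(transcript, metrics=METRICS):
    """Count case-insensitive mentions of each metric phrase in a transcript."""
    return {
        m: len(re.findall(re.escape(m), transcript, flags=re.IGNORECASE))
        for m in metrics
    }

# Toy snippets standing in for full Q1 and Q3 transcripts.
q1 = "Net New Actives grew 9%. TPV rose sharply, and TPV ex-eBay was strong."
q3 = "Total payment volume was healthy this quarter."  # the named metrics vanish

print(mention_counts(q1))  # {'Net New Actives': 1, 'TPV': 2, 'Operating Cash Flow': 0}
print(mention_counts(q3))  # all zeros: the "dog that didn't bark"
```

A drop from a nonzero count to zero across quarters is exactly the "vanishing metric" signal the prompt asks the model to surface.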

This is just one example of using AI to enhance what’s possible using AI. I found this to be a very interesting and educational exercise and example!

Various AIs naturally respond differently to this prompt – and it’s interesting to compare the results for the same company. Give it a try!

After running the prompt on a couple of companies, I realized that this is a prompt I’d like to generalize and save away as a Gemini Gem so that I can easily run it whenever I want. So I asked Gemini:

I would like to generalize this and turn it into a gem that I can run on any company.

Gemini’s response:

This is a great move. By generalizing this, you are creating a “Forensic Short-Seller” agent that you can drop onto any earnings situation.

Below is the Master Prompt Template. I have swapped out the PayPal-specific metrics for variable placeholders and generalized the logic to work across industries (e.g., it will look for “Same Store Sales” in retail just as aggressively as “ARR” in software).

The “Negative Space” Forensic Prompt
Copy and paste the text below. All you need to do is replace the bracketed [VARIABLES].

Role: Act as a Forensic Financial Analyst and Short-Seller Researcher.

Context: I need you to perform a “Negative Space Analysis” on the last 3 earnings calls for [INSERT TICKER SYMBOL].

The Theory: Corporate executives are incentivized to highlight growth and obscure deterioration. The truth is often found in what they stop saying, or how they change their phrasing, rather than their headline numbers.

Your Goal: Do not summarize the “Bull Case.” Identify omission, deflection, and narrative drift.

Step 1: The “Vanishing Metric” Analysis

Scan the “Key Performance Indicators” (KPIs) reported in the oldest of the three calls.

Compare them to the most recent call.

Identify any specific metric (e.g., Customer Count, Retention Rate, Margins per Unit, backlog) that was prominent in earlier calls but was removed, redefined, or de-emphasized in the most recent call.

Output: A table: | Metric Name | Mention Count (Oldest Call) | Mention Count (Newest Call) | The Executive Excuse (if any given) |

Step 2: Semantic Divergence (Scripted vs. Unscripted)

Analyze the tone difference between the CEO/CFO Prepared Remarks (Scripted) and their Q&A Responses (Unscripted).

Identify instances where a confident claim in the script was walked back in the Q&A using “hedging” language (e.g., “headwinds,” “macro environment,” “transitory,” “conservative view”).

Flag any “Adjective Inflation”—using strong adjectives (“robust,” “tremendous”) to describe mediocre data.

Step 3: The “Non-Answer” Detection

Review the Q&A section specifically. Identify questions where the executive:

Pivoted: Asked about A, answered about B.

Filibustered: Gave a long, jargon-heavy answer without specific numbers.

Deferred: Claimed it is “too early to tell” for a strategic initiative that should have data by now.

Format:

The Question: [Quote the Analyst’s hard question] The Evasion: [Quote the specific part of the answer that dodges] Translation: [Briefly explain what they were hiding]

Step 4: Sentiment vs. Price Action

Look at the stock price performance in the 10 days following each of the 3 calls.

Compare the CEO’s confidence level (High/Medium/Low) with the market’s reaction.

Pattern Recognition: Does the market consistently sell off despite “good news” (indicating management has lost credibility)?

Final Verdict:

Rating: Accumulate / Neutral / Distribute (Sell)

Conviction Score (1-10): (10 = The company is totally opaque/hiding bad news; 1 = The company is transparent).

The “Smoking Gun”: One sentence summarizing the single most concerning omission or change in narrative.

How to use this “Gem” effectively
To get the most out of this generalized prompt, keep these three tweaks in mind:

Define the “Golden KPI”: If you know the industry, add a hint in Step 1.

For SaaS: “Focus on Net Dollar Retention and ARR.”

For Retail: “Focus on Same Store Sales and Inventory levels.”

For Manufacturing: “Focus on Order Backlog and Book-to-Bill ratio.”

Provide the Data: Most LLMs cannot browse live transcripts behind paywalls perfectly. For best results, paste the text of the transcripts (or the Q&A sections) directly into the chat before running the prompt, or ensure you are using a model with active web browsing that can access financial news sites.

The “Competitor Check”: If you want to make it even sharper, add this line to Step 2: “Contrast their tone with [Competitor Ticker], who reported recently.”

Categories: AI, AI: Large Language Models, AI: Prompting

Liquid Software and the Death of the “User”

There is a profound disconnect in how we talk about Artificial Intelligence right now. In the boardrooms of legacy corporations, AI is a “strategy” to be committee-reviewed—a tentative toe-dip into efficiency. But on the ground, among the “AI natives,” something entirely different is happening. AI isn’t just making the old work faster; it is fundamentally changing the texture of what we build and how we think.

In a recent conversation, Reid Hoffman and Parth Patil explored this shift, and the metaphor that struck me most was the idea of software becoming “liquid.”

The Era of Liquid Software

For decades, we have treated software like furniture. We buy a CRM, a project management tool, or an analytics dashboard. It is rigid, finished, and distinct from us. We are the users; it is the tool. But Patil demonstrates a different reality: one where he drops a folder of raw CSV files into an agent like Claude Code and asks it to “look at the data and build me a dashboard.”

Sixty seconds later, he has a fully functional, interactive HTML dashboard. He didn’t buy it. He didn’t spend three weeks coding it. He simply willed it into existence for that specific moment.

This is “vibe coding.” It’s a term that sounds almost dismissive, but it represents a radical democratization of creation. You no longer need to know the syntax of Python to build a tool. You just need to know the “vibe”—the outcome you want, the logic of the problem, and the willingness to dance with an intelligent agent until it manifests.

The philosophical implication here is staggering. We are moving from a world of scarcity of capability to a world of abundance of cognition. When you can spin up a custom tool for a single week-long project and then discard it, the friction of problem-solving evaporates. The “app” is no longer a product you buy; it’s a transient artifact you summon.
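For a sense of how little code a "liquid" dashboard actually takes, here is a toy Python sketch of the CSV-to-HTML idea. It is not Patil's agent run, just a hand-rolled approximation: tables only, no charts, and the file paths are whatever you point it at.

```python
import csv
import html
import pathlib

def csv_to_dashboard(csv_paths):
    """Render each CSV file as an HTML table on one self-contained page."""
    sections = []
    for path in csv_paths:
        with open(path, newline="") as f:
            rows = list(csv.reader(f))
        if not rows:
            continue  # skip empty files
        head = "".join(f"<th>{html.escape(c)}</th>" for c in rows[0])
        body = "".join(
            "<tr>" + "".join(f"<td>{html.escape(c)}</td>" for c in row) + "</tr>"
            for row in rows[1:]
        )
        sections.append(
            f"<h2>{html.escape(pathlib.Path(path).name)}</h2>"
            f"<table><tr>{head}</tr>{body}</table>"
        )
    return "<html><body>" + "".join(sections) + "</body></html>"

# Usage: write the result to disk and open it in a browser.
# pathlib.Path("dashboard.html").write_text(csv_to_dashboard(["sales.csv"]))
```

It isn't interactive, and it will be thrown away next week, which is precisely the point.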

Applying the “Vibe Code” Mindset

But how do we, especially those of us who don’t identify as “technical,” bridge the gap between watching this magic and wielding it? The conversation offers a roadmap. It starts by shedding the identity of the “user” and adopting the identity of the “orchestrator.”

If you want to move from passive observation to active application, here are three specific ways to start:

1. The “Interview Me” Protocol

We often stare at the blinking cursor, unsure how to prompt the AI. Hoffman suggests a reversal: Make the AI the interviewer. When you face a complex leadership challenge or a strategic knot, open your frontier model (Claude, GPT-4o, etc.) and say:

“Interview me about this problem until you have enough information to propose a framework or solution.”

This forces you to articulate your tacit knowledge, which the AI then structures into something actionable. It turns the monologue into a Socratic dialogue.

2. Build “Throwaway” Internal Tools

Stop looking for the perfect SaaS product for every niche problem in your team. If you have a messy recurring taskโ€”like organizing client feedback or synthesizing weekly reportsโ€”try “vibe coding” a solution. Use a tool like Replit or Cursor. Upload your messy data (anonymized if needed) and tell the agent:

“Write a script to organize this into a table based on sentiment.”

Don’t worry if the code is ugly. Don’t worry if you throw it away next month. The value is in the immediacy of the solution, not the longevity of the code.
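As a concrete (and deliberately naive) sketch of what such a throwaway script might look like, here's a keyword-based sentiment sorter in Python. A real pass would use the agent itself or a proper sentiment model, and the keyword lists here are invented for illustration:

```python
# Naive keyword lists, invented for illustration. A real tool would use the
# agent or a sentiment model instead of bag-of-words matching.
POSITIVE = {"love", "great", "fast", "helpful"}
NEGATIVE = {"slow", "confusing", "broken", "crash"}

def label(feedback):
    """Classify one feedback line by counting positive vs. negative keywords."""
    words = set(feedback.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def to_table(items):
    """Return (sentiment, feedback) rows, negative-first, for quick triage."""
    order = {"negative": 0, "neutral": 1, "positive": 2}
    return sorted(((label(f), f) for f in items), key=lambda row: order[row[0]])

for sentiment, text in to_table([
    "Export is broken and slow",
    "Love the new editor, great update",
    "Docs could be clearer",
]):
    print(f"{sentiment:>8} | {text}")
```

Ugly, disposable, and done in minutes; exactly the kind of artifact you summon and then discard.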

3. Transform Meetings into Data

Meetings are usually where knowledge goes to die. They are ephemeral. But if you transcribe them (with permission), they become data. Don’t just ask for a summary. Feed the transcript to an agent and ask:

“Who should we have consulted on this decision that wasn’t in the room?”
“Create a decision matrix based on the arguments presented.”

This turns a passive event into an active, queryable asset.

Conclusion

The danger, as Hoffman notes, is the “secret cyborg”—the employee who uses AI to do their job in two hours and spends the rest of the week hiding. But the real win comes from the amplified team, where we share these “vibe coded” tools and prompts openly.

We are entering an age where your imagination is the only true constraint. If you can describe it, you can increasingly build it. The question is no longer “is there an app for that?” but “can I describe the solution well enough to bring it to life?”

Categories: AI, AI: Prompting, Uncategorized

Unlock AI Potential with Advanced Prompting Techniques

In the video You’re not using AI like THIS (an interview between Reid Hoffman and AI engineer Parth Patil), they discuss several advanced prompting strategies that move beyond simple “question and answer” interactions. The core philosophy they advocate for is treating AI not just as a search engine, but as a high-bandwidth collaborator that requires rich context and interactive dialogue.

I asked Gemini 3 Pro the following:

I'm trying to get a lot better at constructing more effective prompts for using Gemini. This video talks about several techniques I'd like to understand better. Please educate me further on what they discuss. Include example prompts to illustrate your points. Here's the video: https://youtu.be/cg3Lsr8m2ZQ?si=sGUwPA6ixGM3bnpG

Gemini’s response follows below:

Here are the key techniques discussed, along with educational explanations and example prompts to help you use them with Gemini.

1. The “Interview Me” Protocol (Meta-Prompting)

Timestamp: [05:30]

Most people rush to ask for a solution immediately. Parth Patil argues that we often don’t even know the full scope of our own problems yet. Instead of asking for the answer, you should ask the AI to help you define the problem first.

  • The Concept: Force the AI to gather the necessary context from you before it attempts to solve anything. This mimics how a senior consultant or engineer would interact with you—they wouldn’t just start coding; they would ask questions first.
  • Why it works: It prevents “hallucinated” or generic advice by grounding the AI’s response in your specific constraints (budget, timeline, personal preferences) that you might have forgotten to mention.

Example Prompt:

“I want to [insert goal, e.g., ‘launch a newsletter’ or ‘refactor this code’]. Describe the problem back to me, but before we begin, interview me. Ask me clarifying questions until you feel you have enough context to provide the best possible solution. Ask the questions one by one.”

2. Extreme Role Assignment (Persona Swarms)

Timestamp: [03:23]

While “Act as an expert” is common advice, Patil suggests taking this much further by using extreme or opposing personas to stress-test your ideas. He mentions creating “100,000 unique expert personalities” to cover every angle.

  • The Concept: You can simulate a “board of directors” or a specific user base. You don’t just want an answer; you want a perspective.
  • Why it works: An “optimist” will give you different advice than a “skeptical VC.” By asking for specific emotional or professional lenses, you break the AI out of its default “neutral assistant” tone.

Example Prompt (The Skeptic):

“Pretend you are the most skeptical Venture Capitalist imaginable. I am going to pitch you my idea. I want you to find 10 specific, harsh reasons why this will fail. Do not be polite; be critical.”

Example Prompt (The Swarm):

“I have a problem regarding [topic]. Generate 5 distinct expert personas relevant to this (e.g., a data scientist, a behavioral psychologist, an exhausted parent, etc.). Have each of them write one paragraph giving their unique perspective on how to solve it.”

3. “Voice Pilling” (High-Bandwidth Context Dumping)

Timestamp: [10:46]

Patil describes “Voice Pilling” as perhaps the most powerful technique. This involves using voice-to-text to “ramble” at the AI for 5–10 minutes.

  • The Concept: Typing filters your thoughts; you tend to edit yourself for grammar and structure, which leads to “low bandwidth” communication. When you speak (or type in a stream-of-consciousness style), you provide a massive amount of messy but valuable context.
  • Why it works: AI is incredibly good at structuring unstructured data. It acts as an editor that can sift through your “brain dump” to find the gold.
  • How to do it in text: If you aren’t using voice, you can simulate this by typing without stopping, ignoring typos, and just letting your thoughts flow.

Example Prompt:

“I am going to provide a stream-of-consciousness ‘brain dump’ about a project I am stuck on. It will be messy, unorganized, and full of typos. Please listen to everything, organize my thoughts into a coherent structure, and then tell me what key piece of information I am missing.”

4. The Multi-Agent Orchestration (Manual Swarm)

Timestamp: [29:26]

Patil demonstrates using different AI models (Claude, Gemini, ChatGPT/Codex) simultaneously for different parts of a project.

  • The Concept: Treat different AIs like specialized employees.
    • Gemini: Use it for large context tasks (e.g., “Read my entire blog history and suggest new topics” [29:58]).
    • Claude: Use it for nuance and coding architecture.
    • ChatGPT/Codex: Use it for quick execution or web browsing.
  • Educational Application: You don’t need fancy software to do this. You can simply open three tabs and assign them different “jobs” for the same project.

Example Prompt (for Gemini, leveraging its large context window):

“I am pasting the transcripts of the last 10 meetings I’ve had. Read all of them and create a psychological profile of my management style, then suggest 3 specific improvements.”
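The “three tabs, three jobs” idea above can be sketched as a tiny routing table. This is a hypothetical illustration, not a real API integration: the model names are just labels for which browser tab (or client) you would paste a given task into.

```python
# A minimal sketch of the "manual swarm": route each kind of task to the
# model best suited for it. These assignments mirror the post's suggestions;
# the dictionary is a hypothetical stand-in for "which tab do I use?"

ROUTING = {
    "large-context": "gemini",   # e.g. "read my entire blog history"
    "architecture": "claude",    # nuance, coding architecture
    "execution": "chatgpt",      # quick execution, web browsing
}

def route(task_kind: str) -> str:
    """Return which 'specialized employee' (model) handles this task kind."""
    return ROUTING.get(task_kind, "chatgpt")  # default to quick execution
```

The point is less the code than the discipline: deciding up front which model owns which job, instead of pasting everything into one chat.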

Summary of the “Mental Shift”

The video emphasizes that the biggest barrier to effective prompting isn’t technical skill, but ego [07:07]. Users often try to “steer” the AI too much because they think they already know the answer. The most effective prompters are “humble” enough to say, “I don’t know what the right questions are – you tell me.”

Categories
AI AI: Prompting

Carving Away: Part II

Well, that escalated quickly.

It looks like the wood carving prompt from my recent post struck a chord. I’ve seen some incredible results floating around, and since traffic is still high, I wanted to share a few “remixes” of the original prompt that I’ve been experimenting with this weekend using Gemini 3 Pro.

If you mastered the basic “hand-carved miniature” look, here are three ways to push the aesthetic in different directions.

1. The “Dark Walnut” Aesthetic

The original prompt tends to produce a light, pine-like wood. Use this variation if you want something moodier, richer, and more polished. It works exceptionally well for portraits or architectural subjects where you want a high-end feel.

The Prompt Addition:
Append this to your subject line: ...carved from dark polished walnut wood, rich deep grain texture, rim lighting, subsurface scattering, mahogany tones, smooth finish.

2. The “Painted Folk Art” Look

Sometimes raw wood is a bit too monochromatic. This variation pushes the model to apply a distressed paint job, making the image look like a vintage toy or traditional folk art found in an attic.

The Prompt Addition:
Append this to your subject line: ...faded hand-painted wooden figurine, chipped paint revealing wood underneath, vintage folk art style, muted primary colors, distressed texture.

3. The “Rough Hewn” Sketch

This is my personal favorite. It forces the AI to leave “tool marks,” making the object look like a work-in-progress rather than a finished product. It adds a tactile, human imperfection that many AI images lack.

The Prompt Addition:
Append this to your subject line: ...rough hewn unfinished wood, visible chisel marks, splintered edges, raw timber texture, sawdust specks, harsh directional lighting.
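Since all three remixes work the same way (append a style suffix to your subject line), a small helper can keep them consistent. This is just string assembly, a sketch for anyone batch-generating variations; paste the result into your image model of choice.

```python
# Sketch: compose a base wood-carving subject line with one of the three
# style "remixes" from this post. Pure string manipulation, no APIs.

STYLES = {
    "dark_walnut": ("carved from dark polished walnut wood, rich deep grain "
                    "texture, rim lighting, subsurface scattering, mahogany "
                    "tones, smooth finish"),
    "folk_art": ("faded hand-painted wooden figurine, chipped paint revealing "
                 "wood underneath, vintage folk art style, muted primary "
                 "colors, distressed texture"),
    "rough_hewn": ("rough hewn unfinished wood, visible chisel marks, "
                   "splintered edges, raw timber texture, sawdust specks, "
                   "harsh directional lighting"),
}

def remix(base_prompt: str, style: str) -> str:
    """Append a style suffix to the base subject line."""
    return f"{base_prompt.rstrip('. ')}, {STYLES[style]}."
```

For example, `remix("A hand-carved wooden miniature figure of two friends", "dark_walnut")` yields one ready-to-paste prompt per style.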


A Note on Models

I am still finding that Gemini 3 Pro handles the texture mapping better for the “Rough Hewn” look, capturing the jagged edges convincingly. However, Nano Banana seems to prefer the polished “Dark Walnut” style, producing really nice specular highlights on the wood grain.

If you are getting results that look too “plastic,” try lowering the guidance scale slightly or adding “varnish” to your negative prompt.

Let me know what you carve out of the latent space next.

Categories
AI AI: Large Language Models AI: Prompting

Carving Away

Update Jan 12: Due to popular demand, I’ve posted 3 new advanced prompt styles (Dark Walnut, Folk Art, and Rough Hewn). [Read Part II here].

Check out the image below, which features a wood carving of my friend Doug and me. I made it from a photo we took during a workshop in Paris ten years ago.

The image is the result of a fun prompt for Google Gemini 3 Pro and Nano Banana. You upload a photo with a person or a couple of people then enter this prompt:

Create an image of a hand-carved wooden miniature figure of the people in the attached image, shaped with visible knife marks, natural grain texture, and smooth unfinished edges. Placed on a workshop table with carving tools, wood shavings, and soft warm directional lighting. 1080×1080.
Wood carving of the likenesses of Doug Kaye and Scott Loftesness
Doug Kaye and Scott Loftesness

I wish I could remember where I got this particular prompt – I’ve been collecting a lot of them as they show up in my feed on X but often fail to note the source. This was definitely one of the better creative image prompts that I’ve come across.

Categories
AI AI: Prompting Writing

AI as a Mirror, Not a Maker

I’ve been thinking a lot lately about how we move past the novelty phase of AI – beyond just asking a chatbot to “write a poem about a turkey” or summarize a meeting – and into actual thinking with these tools.

As a lifelong learner, I’m always on the hunt for workflows that help me synthesize information better. Most of the “AI for writing” advice I see online is pretty generic. But I recently came across a breakdown of how four high-profile writers are making effective use of tools like NotebookLM and Claude in ways that are much more sophisticated than simple text generation.

What jumped out at me is that none of these writers use AI to write for them. They use it to structure, challenge, and code.

Here are the four models that caught my eye.

1. The Triangulated Research Base (Steven Johnson)

Steven Johnson (Where Good Ideas Come From) has a workflow that solves a problem I face constantly: the messy “research phase.”

Instead of treating the AI as an oracle, he treats it as a connection engine. He creates a dedicated notebook (using Google’s NotebookLM) and uploads three distinct types of sources: a primary source (like a raw PDF or study), a secondary source (like a context article), and a multimedia transcript.

Then, rather than asking for a summary, he asks the AI to find the friction between them: “What themes appear in the interview transcript that contradict the historical account in the PDF?”

It’s less about getting an answer and more about finding the blind spots in your own reading.

2. The Diagnostic Editor (Kenny Kane)

This one really resonated with me because it mirrors the experiment I tried recently with my “Bubble Bath” post.

Kenny Kane uses Claude not to generate prose, but to act as a ruthless developmental editor. He uploads a messy draft and runs a “Diagnostic” prompt. He doesn’t ask “fix this”; he asks: “Where does the argument drift? Where does the energy drop?”

He even has the AI analyze his best writing to identify his specific “DNA” (sentence length, vocabulary choice) and then asks it to apply that same tone to his rougher sections. It’s using the AI as a mirror rather than a ghostwriter.

3. The Memo-to-Demo Shift (Dan Shipper)

Dan Shipper at Every is doing something fascinating that changes the definition of writing altogether. He argues that in the AI age, we shouldn’t just describe a concept; we should build a small app to demonstrate it.

If he’s writing about “Spaced Repetition,” he doesn’t just explain the theory. He asks Claude’s Artifacts feature to “Write a React component that lets a user test spaced repetition live in the browser,” and then embeds that little app directly into the essay. The writing becomes 50% prose and 50% software.
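To make the “memo-to-demo” idea concrete: the scheduling logic such a spaced-repetition demo might run is only a few lines. This is a deliberately simplified sketch of my own (real algorithms like SM-2 track ease factors and grades), not Shipper’s actual component.

```python
# Toy spaced-repetition scheduler: a successful review doubles the
# interval before the next review; a failed review resets it to 1 day.
# A simplified stand-in for what an embedded demo app could illustrate.

def next_interval(current_days: int, remembered: bool) -> int:
    """Return the number of days until the next review of a card."""
    if not remembered:
        return 1                      # forgot: see it again tomorrow
    return max(1, current_days) * 2   # remembered: wait twice as long
```

Reviews at days 1, 2, 4, 8, 16… fall out naturally, and a reader can feel why the theory works instead of just being told.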

4. The Co-Intelligence Loop (Ethan Mollick)

Ethan Mollick focuses on breaking the echo chamber. Before he publishes, he spins up simulated personas – a skeptical VC, a confused novice, an expert in a tangential field – and asks them to critique his draft from their specific viewpoints.

It’s effectively a focus group of one.


How to Get Started

If you’re like me, seeing all these workflows might feel a bit overwhelming. My advice? Don’t try to overhaul your entire writing process overnight. Just pick one experiment to try this week.

Here are two simple entry points:

Experiment A: The “Blind Spot” Check (For Research)

If you are reading up on a topic, don’t just take notes. Open Google NotebookLM, create a new notebook, and upload your sources (PDFs, URLs, or pasted text). Then, ask this specific question:

“Based strictly on these sources, what is the strongest argument against my current thinking? What connection between Source A and Source B am I missing?”

Experiment B: The “Ruthless Editor” (For Writing)

If you have a rough draft sitting on your hard drive, copy it into Claude or ChatGPT and use this prompt (adapted from Kenny Kane’s workflow) before you do any manual editing:

“Act as a senior editor. Do not rewrite this text. Instead, analyze my draft and tell me: 1) Where does the argument lose energy? 2) Does the opening hook successfully promise what the conclusion delivers? Be critical.”

I’ve found that using the tools this way – as a partner for thinking rather than just generating – is where the real magic happens.

Which one will you try first?

Categories
AI: Large Language Models AI: Prompting Photography

Creative Image Analysis with an LLM Partner

I came across a fun experiment being done by Mitchell Lensink which he shares on one of his blogs.

He takes one of his photos, feeds it into an AI large language model, and asks the LLM to produce a detailed description of the photo. He then asks the model to create a new image based on the description it produced in its analysis.

I tried this out on a few of my images and enjoyed seeing the results. Here’s an example…

Prompt (taken from one of his examples): Please describe the attached image as well as you can. Don’t skimp out on details and make sure you will be able to understand what’s in the image when you read the description. You need to be able to form a mental picture of the image, without actually seeing it.

Here’s what Google Gemini 2.0 Flash produced:

Imagine a serene lake bathed in the warm glow of either sunrise or sunset. The sun, positioned towards the upper left of the frame, casts a brilliant light that flares slightly around the edges of the tall trees lining the left bank. These trees are silhouetted against the bright sky, their dark shapes contrasting sharply with the golden light.
The lake itself occupies the majority of the frame, its surface reflecting the sky and the surrounding trees like a mirror. Gentle ripples disturb the otherwise smooth surface, creating subtle patterns of light and shadow. In the middle of the lake, a fountain sends a spray of water upwards, catching the light and adding a dynamic element to the tranquil scene.
The far bank of the lake is also lined with mature trees, their foliage appearing lush and green where the light touches them, and darker in the shadows. A low wooden fence runs along a portion of this bank, suggesting a pathway or boundary.
In the foreground, along the near edge of the lake, there’s a patch of vibrant green grasses interspersed with delicate white and yellow flowers. These plants are in sharp focus, drawing the viewer’s eye into the scene and adding a touch of natural beauty in the immediate foreground.
The overall atmosphere is peaceful and picturesque, a moment captured in time where the natural elements – water, trees, light, and flora – come together in a harmonious and visually appealing way. The air likely feels still and the sounds would probably be soft – perhaps the gentle splashing of the fountain or the rustling of leaves.

Wow! That’s a description that goes well beyond anything I might have written to describe my photograph! Lots of details and new insights that help us see more deeply into what’s in that image.

And here’s the next step – the image it produced when I asked it to create a new one based on the description above:

That’s a very different image but I can see how it got there!

This little experiment is worth playing with some more, I think!

Categories
AI AI: Large Language Models AI: Prompting Business

Stuck in an Elevator with Buffett and Munger

Jonathan Clements had an interesting blog post this week on his Humble Dollar blog about whether ChatGPT’s writing could replace human writers. He did an experiment – using a prompt about being trapped in an elevator with Jack Bogle of Vanguard fame and the investment lessons learned from that experience.

I took his prompt and modified it slightly – instead of being trapped with Jack Bogle, what would happen if I were trapped with Warren Buffett and the late Charlie Munger of Berkshire Hathaway?

Here’s the prompt I used with Meta.ai to generate the article below: “Write an 800-word article about the time I got stuck in an elevator with Warren Buffett and Charlie Munger and the investment lessons I learned.”

Here’s the article that the AI generated, unmodified and unedited by me:

Getting Stuck in an Elevator with Warren Buffett and Charlie Munger: A Chance Encounter that Taught Me Invaluable Investment Lessons

It was a typical conference day, with attendees buzzing around, eager to learn from the brightest minds in the finance industry. I had just finished a riveting session and was heading to the next one when disaster struck. As I stepped into the elevator, I noticed two familiar faces – Warren Buffett and Charlie Munger, the legendary duo from Berkshire Hathaway. Little did I know, this chance encounter would turn into an unforgettable experience that would shape my investment philosophy forever.

As the doors closed, the elevator jolted to a sudden halt. We were stuck. The initial silence was broken by Warren’s calm and composed voice, “Well, looks like we’re in this together now.” Charlie, with his signature wit, quipped, “I’ve been stuck with Warren for decades, but this is a new kind of challenge.” The ice was broken, and we began to chat.

Lesson 1: Stay Calm and Patient

As we waited for help to arrive, Warren and Charlie’s demeanor taught me a valuable lesson. Despite being two of the most successful investors in the world, they didn’t panic or get flustered. They took it in stride, and their calmness was infectious. It reminded me of Warren’s famous quote, “Price is what you pay. Value is what you get.” In that moment, I realized that staying calm and patient is crucial in investing, just as it is in dealing with unexpected situations like a stuck elevator.

Lesson 2: Focus on the Big Picture

As we discussed our predicament, Charlie pointed out that getting stuck in an elevator was a minor setback in the grand scheme of things. He drew an analogy with investing, saying, “Just as we’re not defined by this moment, a stock’s value shouldn’t be judged by its short-term performance.” Warren added, “It’s essential to focus on the underlying fundamentals and long-term prospects, rather than getting bogged down by short-term volatility.”

Lesson 3: Diversification is Key

As we chatted, Warren mentioned that Berkshire Hathaway’s diversified portfolio had helped them weather many economic storms. Charlie emphasized the importance of spreading risk, saying, “You don’t want to put all your eggs in one basket, just like you wouldn’t want to be stuck in an elevator with only one exit strategy.” This conversation reinforced the value of diversification in my investment strategy, ensuring that my portfolio is resilient and adaptable to changing market conditions.

Lesson 4: Continuous Learning is Crucial

As we waited, Warren and Charlie engaged in a lively discussion about the latest developments in the tech industry. They were curious, asked questions, and sought to understand the implications of emerging trends. This exchange reminded me that investing is a continuous learning process. Staying up-to-date with market news, reading books, and seeking knowledge from experts are essential to making informed investment decisions.

Lesson 5: Humility and Humor Go a Long Way

Throughout our ordeal, Warren and Charlie’s humility and sense of humor put us all at ease. They joked about their own mistakes and shared stories of their early days in the investment world. This lightheartedness taught me that even the most successful investors can maintain a sense of humor and humility, acknowledging that no one has all the answers.

A Chance Encounter that Changed My Perspective

After what felt like an eternity, the elevator doors finally opened, and we emerged, relieved and grateful. As we parted ways, Warren and Charlie left me with a few parting words of wisdom: “Investing is a marathon, not a sprint. Stay focused, patient, and keep learning.” That chance encounter in the stuck elevator had turned into an invaluable learning experience, shaping my investment philosophy and reminding me of the importance of humility, humor, and a long-term perspective.

As I walked away from the conference center that day, I felt grateful for the unexpected turn of events that had brought me face-to-face with two investment legends. The lessons I learned in that elevator have stayed with me, guiding my investment decisions and reminding me to approach the markets with a clear head, a sense of humor, and a commitment to continuous learning.