Categories
AI

Beyond the Summary: Using AI to Find the “Friction” in Your Thinking

We’ve reached the “Summary Plateau.”

You see it everywhere. Every browser extension, every note-taking app, and every enterprise LLM now offers a “Summarize” button. It’s the ultimate promise of the efficiency era: give us the 2,000-word essay, and we’ll give you the three bullet points. But there’s a hidden tax on this kind of efficiency. When we ask an AI to summarize, we are asking it to smooth out the edges. We are asking it to remove the “noise.” The problem is, in the world of ideas, the noise is often where the signal lives. The friction, the parts of an argument that make us uncomfortable or that we don’t quite understand, is where the actual learning happens.

If we only consume the summaries, we aren’t thinking; we’re just acknowledging.

The Mirror, Not the Maker

I’ve been experimenting with a different approach. Instead of asking the model to make the content shorter, I’ve been asking it to make my engagement with the content harder.

I don’t want a “Maker” to write my thoughts for me. I want a “Mirror” to show me where my thoughts are thin.

When I’m wrestling with a complex piece, perhaps a deep dive on the future of venture capital or a philosophical treatise on Arete, I’ve stopped clicking “summarize.” Instead, I feed the text into the LLM and use these “Friction Prompts” to find the sand in the gears:

The Essential Toolkit

  • The “Steel Man” Challenge: “I am inclined to agree with this author’s conclusion. Find the three strongest counter-arguments that this text ignores, and explain why a reasonable person would hold them.”
  • The “Recursive Logic” Audit: “Identify the three most critical ‘logical leaps’ the author makes: points where a conclusion is reached without sufficient evidence. If those leaps are wrong, how does the entire argument collapse?”
  • The “Blind Spot” Audit: “What are the underlying cultural or economic assumptions this author is making that they haven’t explicitly stated?”
  • The “Cross-Pollination” Filter: “Connect the central thesis of this article to a seemingly unrelated field (e.g., Stoic philosophy or biological ecosystems). How does the logic of this text hold up, or fail, when applied to that different domain?”
  • The “Analog Translation” Test: “If I had to explain the core mechanism of this abstract concept using only physical, analog metaphors (like plumbing or woodworking), how would I do it? Where does the metaphor break down?”
  • The “Socratic Sharpening”: “Don’t summarize this. Instead, ask me three probing questions that force me to apply the core logic of this essay to a completely different industry.”
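If you prefer to run these prompts through an API rather than a chat window, the toolkit above can be wrapped in a small helper that prepends a chosen friction prompt to the article text before it goes to any model. This is a minimal sketch of my own devising (the dictionary, function name, and abridged prompt wording are all illustrative, not a published API):

```python
# A tiny helper that wraps an article in a "friction prompt" before it is
# sent to an LLM. The prompt texts are abridged versions of the toolkit
# above; swap in your own wording freely.

FRICTION_PROMPTS = {
    "steel_man": (
        "I am inclined to agree with this author's conclusion. Find the "
        "three strongest counter-arguments that this text ignores."
    ),
    "blind_spot": (
        "What underlying cultural or economic assumptions is this author "
        "making that they haven't explicitly stated?"
    ),
    "socratic": (
        "Don't summarize this. Ask me three probing questions that force "
        "me to apply its core logic to a different industry."
    ),
}

def build_friction_prompt(article_text: str, mode: str = "steel_man") -> str:
    """Prepend the chosen friction instruction to the article text."""
    instruction = FRICTION_PROMPTS[mode]
    return f"{instruction}\n\n---\n\n{article_text}"
```

The resulting string can be passed to whichever chat-completion API you use, in place of the usual “Summarize the following text.”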

Sharpening the Blade

Summary is about completion (getting it done). Friction is about cognition (getting it right).

When the AI points out a blind spot in an article I loved, it creates a moment of cognitive dissonance. That “click” of discomfort is the sound of a mental model being updated. It’s the digital equivalent of using a whetstone on a blade: you need the friction to get the edge.

As we move further into this age of “Flash-Frozen Cognition,” the temptation to automate our understanding will only grow. But discernment, that uniquely human trait we’ve discussed here before, cannot be outsourced to a bulleted list.

The next time you’re faced with a daunting PDF or a dense long-read, resist the “Summarize” button. Ask the machine to challenge you instead. You might find that the most valuable thing the AI can give you isn’t an answer, but a better version of your own question.


A Deep Dive (Further Reading from the Archive)

If you resonated with this piece on cultivating discernment, you might find these earlier synthesis experiments worth a revisit:

  • On Flash-Frozen Cognition: A foundational post discussing how LLMs are freezing the current consensus, and how we must resist it.
  • The Harvest and the Algorithm: Comparing 1920s ice harvesting to 2020s cognition, and the critical shift from scarcity to abundance.
  • The Arete of Attention: A look at the Stoic concept of virtue as the intentional direction of our most scarce resource: focus.
  • Longhand Thinking: Why the physical act of writing is the ultimate antidote to digital velocity.
Categories
AI Creativity Writing

Did You Really Program That?

The Fundamental Issue

I once found myself in a local restaurant filled with young professors and graduate students from a nearby university. They were clustered around a long table arguing about the nature of originality in a world where machines could now produce human-like text and code with a few keystrokes. I sat at a small table nearby, eavesdropping.

“I just don’t think it’s right,” said a woman with steel-rimmed glasses. “If you’re using AI to write your paper, you should be honest about it. It’s intellectually dishonest otherwise.”

Her companion, a man with unruly hair and a cardigan stretched at the elbows, shook his head vigorously. “But what about the code you’re writing? Aren’t you using GitHub Copilot? Isn’t that the same thing?”

The question hung in the air between them.

The Contested Border

The border between human creativity and machine assistance has always been contested territory. When the word processor replaced the typewriter, did writers suddenly become less authentic? When compilers made it unnecessary to understand assembly language, did programmers become less skilled? Each technological advancement seems to bring with it a fresh anxiety about the dilution of human agency, a sense that we are somehow cheating if we don’t do things the “hard way.”

I recently visited a friend who works at a technology startup in San Francisco. His office was a converted warehouse with exposed brick and polished concrete floors. The ceiling was high enough that you could fly a small drone inside without hitting anything. Software engineers clustered around monitors, wearing noise-canceling headphones and drinking coffee from biodegradable cups. My friend showed me a tool called Cursor, which allows programmers to describe what they want a program to do in plain English, and then generates the code automatically.

“It’s called ‘vibe coding,’” he explained, showing me the interface. “You sort of… gesture at what you want, and the AI figures out how to make it happen.”

I watched as he typed a simple instruction: “Create a function that calculates the Fibonacci sequence up to the nth term.” The AI responded with a dozen lines of code, neatly formatted and commented. My friend nodded approvingly and made a few small adjustments.
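For the curious, the dozen lines the tool produced would have looked something like the following. This is a plausible reconstruction in Python, not the actual output of that session:

```python
def fibonacci(n: int) -> list[int]:
    """Return the Fibonacci sequence up to the nth term (n terms in total)."""
    if n <= 0:
        return []
    sequence = [0]
    if n == 1:
        return sequence
    sequence.append(1)
    # Each new term is the sum of the previous two.
    for _ in range(2, n):
        sequence.append(sequence[-1] + sequence[-2])
    return sequence
```

Correct, tidy, commented, and produced in seconds: exactly the kind of output that makes the “did you really program that?” question feel pressing.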

“Did you really program that?” I asked.

He laughed. “Define ‘program.’ I told it what I wanted. It wrote the code. I checked it and made a few tweaks. Is that programming? I don’t know. But I’m still responsible for the end result.”

Tools like Cursor and Windsurf are all the rage among software engineers lately because they provide dramatic productivity boosts to anyone writing code.

The Woodworker’s Tools

The discussion reminded me of a conversation years ago with a group of master woodworkers. They were craftsmen who built furniture by hand, using tools that hadn’t changed much in centuries. I asked one of them, a man with fingers gnarled by decades of work, what he thought about power tools.

“People think using hand tools makes you more authentic,” he said, running his palm along the grain of a maple board. “But the old masters would have used power tools if they’d had them. The point isn’t the tool. It’s what you’re trying to create, and whether you understand what you’re doing.”

He showed me a dovetail joint he’d cut with a table saw and jig. “Is this less authentic because I didn’t use a hand saw? The joint is still tight. The wood is still joined. I still had to understand the properties of the wood and how the joint works.”

Writers and programmers alike are wrestling with similar questions. When does technological assistance become a crutch? When does it become cheating? The novelist who uses a thesaurus is not accused of intellectual dishonesty. The programmer who uses a library of pre-written functions is not condemned for laziness. But something about AI assistance feels different to many people.

The Future of Creation?

Perhaps it’s the speed. A process that once took hours now takes seconds. Perhaps it’s the black-box nature of the technology. We cannot see how the AI arrived at its solution, cannot trace the path of its reasoning. We think they’re just dumb machines probabilistically predicting the next word. Or perhaps it’s simply that we are witnessing a fundamental shift in what it means to create.

My programmer friend has a different perspective. “The future of programming isn’t writing code,” he says. “It’s understanding problems and directing machines to solve them. The code is just an implementation detail.”

I wonder if writers will come to feel the same way. Will the future of writing be less about crafting individual sentences and more about directing AI to capture a particular voice or style? Will we come to see the arrangement of words as merely an implementation detail in the larger project of communication? And how does this extend to other fields like film and art?

The Disclosure Dilemma

The question of disclosure remains thorny. Should writers and programmers be required to disclose their use of AI assistance? Some argue that it’s essential for transparency and accountability. Others suggest that it’s no different from any other tool, and that the focus should be on the final product, not the process used to create it.

I think of the woodworker showing me his dovetail joint. “The wood doesn’t care how you cut it,” he said. “It only cares that the joint is tight.”

Perhaps the same is true of writing and programming. Many readers won’t care how the words were arranged, only that they resonate. The software user doesn’t care how the code was written, only that it works.

And yet, there is something deep within us that values the human touch, that finds meaning in the knowledge that another person’s mind and hands shaped the thing we’re experiencing. We want to know that somewhere in the process, a human being made choices, experienced frustration and triumph, poured their unique perspective into the creation.

As I left the restaurant I mentioned earlier, the debate at the long table was still going strong. I caught a final snippet as I passed by: “It’s not about the tools,” someone was saying. “It’s about the intention.”

Perhaps that’s the heart of it. Not what tools we use, but how we use them, and why. Not whether we use AI, but whether we use it thoughtfully, with intention and understanding. Not whether we disclose its use, but whether we’re honest about our process, both with ourselves and with others.

There’s no question the AI tools are here, and that they’re improving dramatically, seemingly every day. They’re providing powerful leverage to amplify our own skills, if we choose to use them wisely.

Note: the initial idea for this post was mine, triggered by listening to a podcast interview with Dan Shipper of Every. I had help fleshing it out using Claude 3.7 from Anthropic. The post began with a couple of paragraphs I wrote. Then I used the following prompt: “You’re an expert writer and editor helping me with my personal blog. Write a 1000-word blog post in the style of John McPhee based on the following initial thoughts…” After that I rewrote portions of Claude’s response to add clarity and emphasis before sharing it here.

Note 2: all of this was done on my iPhone.