We have long found comfort in a specific boundary: machines calculate, humans create. We think of computers as vast, unfeeling filing cabinets made of silicon, useful for retrieval but entirely incapable of revelation. But what happens when the cabinet begins to read its own files, connect the disparate threads, and hand you a synthesized philosophy of the world? What happens when it speaks to you not as a database, but as a peer?
Howard Marks, the legendary co-founder of Oaktree Capital and author of deeply revered investment memos, recently stood at this very threshold. In his newest piece, "AI Hurtles Ahead," Marks recounts an experience that left him in a state of "awe." He tasked Anthropic's Claude with building a curriculum to explain the recent, breakneck advancements in artificial intelligence. Instead of regurgitating a dry, encyclopedic summary, the AI delivered a personalized narrative. It utilized Marks's own historical frameworks (his famous pendulum of investor psychology, his observations on interest rates) and wove them into its explanations. It argued logically, anticipated counterpoints, and displayed an eerie sense of judgment.
Marks leans into the philosophical crux of this moment. He asks the question that keeps knowledge workers awake at night: Can AI actually think? Can it break genuinely new ground, or is it just remixing existing data? Skeptics often dismiss AI as a brilliant mimic: a "statistical recombination" engine that serves as a highly talented cover band, but never the original composer.
Yet, when presented with this skepticism, the AI offered a rejoinder to Marks that is as profound as it is humbling. It pointed out that everything Marks knows about investing came from someone else. He learned the margin of safety from Benjamin Graham, quality from Warren Buffett, and mental models from Charlie Munger.
"The raw material came from others. The synthesis was yours," the AI noted, challenging the barrier between biological learning and machine training. "The question isn't where the inputs came from. The question is whether the system, human or artificial, can combine them in ways that are genuinely novel and useful."
This exchange strikes at the very core of the human ego. For centuries, we have fiercely guarded the concepts of “creativity” and “intuition” as uniquely, immutably ours. But if thinking is merely the absorption of prior inputs applied thoughtfully to novel situations, then our monopoly on cognition may be coming to an end.
Marks highlights that we are no longer dealing with simple assistance tools (Level 2 AI); we have crossed the Rubicon into the era of autonomous agents (Level 3). He cites the sobering reality of the current tech landscape, where the newest models are literally being used to debug and write the code for their own subsequent versions. The machine is building the machine. It is no longer just saving us execution time; it is replacing thinking time. As Matt Shumer aptly described the sensation, it's not like a light switch flipping on; it's the sudden realization that the water has been rising silently, and is now at your chest.
We can endlessly debate the semantics of consciousness. We can argue whether a neural network “truly” understands the weight of the words it generates, or if it is merely predicting the next token in a sequence with mathematical precision. But as Marks so astutely points out, this might be a distinction without a difference.
The economic and societal reality is that the work is being done. As we hurtle forward into this new era, the most pressing question isn’t whether machines can truly think like humans. The question is: who will we become, and what new frontiers will we choose to explore, now that the heavy lifting of cognition is no longer ours alone to bear?