Categories
AI AI: Large Language Models China

Cranes on the Horizon

In 2005, during my first trip to Shanghai and Beijing, the most striking feature of the skyline wasn’t the architecture—it was the cranes. More than I could possibly count, perched atop half-finished skyscrapers like a mechanical forest. Entire districts seemed to be mid-construction simultaneously, as if someone had pressed a button and the whole country decided to build everything at once. Dan Wang in his book “Breakneck” described China as the “engineering state” that approaches national problems with physical solutions. Back in 2005, coming from Silicon Valley, I thought I understood what growth looked like. I didn’t.

I’ve been thinking about that trip while reading Nathan Lambert’s recent piece, “Notes from Inside China’s AI Labs.” Lambert — who runs the Interconnects newsletter and does serious work tracking the open-weight LLM ecosystem — just returned from visiting essentially every major AI lab in China. Moonshot, Zhipu, Meituan, Xiaomi, Qwen, Ant Ling, 01.ai. He went in with genuine curiosity and came back with humility. That combination is rarer than it should be.

What he found was the cranes. Different domain, same energy.

Lambert’s central observation is about culture, not capability. The Chinese labs aren’t winning on any single technical breakthrough — they’re winning on execution discipline. He describes researchers, many of them active students, who bring no ego to the work. They absorb context fast, drop assumptions faster, and seem genuinely unbothered by the philosophical debates that swirl constantly in the American AI community. When he tried to engage Chinese researchers on the long-term social risks of models or the ethics of AI behavior, those questions “hung in the air with a simple confusion. It’s a category error to them.” Their role is to build the best model. Full stop. To them, an LLM isn’t a philosophical entity to be interrogated; it’s a piece of infrastructure to be optimized.

That description landed for me. Not as a criticism of American research culture, but as a real observation about what the moment demands. Building good LLMs today is, as Lambert puts it, meticulous work across the entire stack — “all points of the model can give some improvements, and fitting them in together is a complex process.”

The work that matters most right now isn’t the 0-to-1 creative leap; it’s the thousand unglamorous decisions executed without complaint. Students who haven’t yet learned to lobby for their own ideas turn out to be well-suited for exactly this.

Lambert ends on a note that’s hard to shake. Looking up from his laptop on a high-speed train, he keeps seeing cranes on the horizon, and he draws the same connection I did, though from the inside: “When I look up from my laptop and always see bunches of cranes on the horizon, it obviously fits in with the broader culture and energy around building in China.”

Twenty years after my first visit, the cranes are still there. They’ve just moved indoors — into server rooms and training runs and model releases that land every few months with quiet confidence. In 2005, what China was building was obvious: you could see the steel frames going up. What’s being built now is harder to see, which may be exactly why it keeps surprising us.

Check out Lambert’s essay – it’s remarkable. If the 20th century was defined by who could move the most earth, the 21st will be defined by who can move the most tokens. And right now, the cranes are moving faster than we think.

Categories
AI AI: Large Language Models

The Echo Effect: Why Prompt Repetition is AI’s Best Kept Secret

In our relentless pursuit of complexity, we often overlook the elegant simplicity of a fundamental human habit: repeating ourselves.

We build colossal architectures, weave intricate neural networks, and throw mountains of computational power at our artificial intelligence systems, hoping to squeeze out a few more drops of reasoning and logic. Yet, sometimes the most profound breakthroughs require no new code, no additional latency, and no extra training data.

Sometimes, you just have to say it twice.

In a fascinating December 2025 paper titled “Prompt Repetition Improves Non-Reasoning LLMs,” researchers Yaniv Leviathan, Matan Kalman, and Yossi Matias uncovered an almost absurdly simple “free lunch” in AI optimization.

Their premise is straightforward: when you aren’t using a heavy reasoning model, simply copying and pasting your input prompt multiple times significantly boosts the model’s performance.

“When not using reasoning, repeating the input prompt improves performance for popular models (Gemini, GPT, Claude, and Deepseek) without increasing the number of generated tokens or latency.”

The mechanics behind this are elegantly pragmatic.

By repeating the prompt, you are moving the heavy computational lifting to the parallelizable “pre-fill” stage of the model’s processing. The AI’s causal attention mechanism gets to process the same tokens again, allowing the later iterations of the prompt to attend to the earlier ones. It effectively acts as a hack to simulate bidirectional attention in a decoder-only architecture.
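The trick itself needs no library at all — you just concatenate the prompt with itself before sending it to the model. Here is a minimal sketch; the helper name, the number of copies, and the blank-line separator are my own choices for illustration, not specifics from the paper, and the final string would simply be used as the user message in whatever chat API you already call.

```python
def repeat_prompt(prompt: str, copies: int = 2, separator: str = "\n\n") -> str:
    """Return `prompt` repeated `copies` times, joined by `separator`.

    The repeated copies are all consumed during the model's parallel
    prefill stage, so the technique adds no generated tokens and
    (per the paper's claim) no decode latency.
    """
    if copies < 1:
        raise ValueError("copies must be >= 1")
    return separator.join([prompt] * copies)


question = "Which word appears twice in: 'the cat sat on the mat'?"
doubled = repeat_prompt(question)
# `doubled` now contains the question twice; send it as the user message
# to a non-reasoning model in place of the original prompt.
```

Because causal attention only looks backward, the second copy of the prompt can attend to every token of the first copy — which is what gives the decoder-only model a rough stand-in for a bidirectional read of the input.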

What’s even more telling is the paper’s observation on why this works so well.

The researchers noted that models trained with Reinforcement Learning (like OpenAI’s deep-thinking variants) naturally learn to “restate the problem” in their internal monologue. They figured out on their own what these researchers are suggesting we do manually: repeat the question to focus the mind.

Reading this paper, I couldn’t help but draw a parallel to the human condition and the nature of listening.

How often do we assume that because we have articulated a thought once, it has been fully absorbed? We fire off a single, dense instruction to a colleague, a partner, or a friend, and then marvel when the nuance is lost in translation.

We suffer from our own attention bottlenecks.

Like a non-reasoning LLM trying to parse a complex query in a single pass, we are constantly bombarded with a stream of tokens—emails, notifications, conversations, fleeting thoughts. To truly understand, to truly digest and synthesize information, we need the grace of repetition.

There is a strange poetry in the fact that to make our most advanced digital minds smarter, we have to talk to them the way we talk to a distracted child or a busy spouse. The “microscope effect” highlighted in the study—where repeating a prompt drastically improved extraction tasks—shows that the failure wasn’t in the model’s capacity to know, but in its capacity to focus. Repetition forces focus. It creates a resonant echo in the context window, a digital highlighter that screams, “This matters. Look here again.”

As we continue to navigate a world increasingly augmented by artificial intelligence, this paper serves as a humbling reminder. The bleeding edge of technology isn’t always found in the most complex equation; sometimes, it’s hidden in the most basic principles of communication.

Whether you’re prompting a billion-parameter language model or trying to connect with the human sitting across from you, the lesson is clear.

Clarity isn’t just about the words you choose. It’s about giving those words the space, the resonance, and the repetition they need to be truly understood.

Say it once to be heard; say it twice to be understood.

Categories
AI AI: Large Language Models Apple

Why AI Works

Drawing on his own explorations of why large language models work so well, former Apple exec Bertrand Serlet has created an excellent 30-minute video introduction to them. He introduces the notion of the “curse of dimensionality” – how the scale of LLMs increases so dramatically – and then the “blessing of dimensionality,” which helps explain some of the “magic” of neural networks. Worth watching!