
Breakout

Jack Clark doesn’t panic easily. He spent years at OpenAI watching capabilities inch upward, then left to co-found Anthropic, and has been writing his Import AI newsletter long enough to have developed — and been wrong about — many priors. So when he publishes an essay saying he has reluctantly arrived at a 60% probability that fully automated AI R&D happens by the end of 2028, the word “reluctantly” deserves some weight.

His essay, published last week and titled “Automating AI Research,” isn’t a press release or a fundraising pitch. It reads more like a man thinking out loud at the edge of something large. “I don’t know how to wrap my head around it,” he writes, which is a notable thing to say publicly when you are one of the architects of the thing you can’t wrap your head around.

The argument is built from benchmarks — not any single one, but a mosaic of them assembled to reveal a trend. SWE-Bench, the test that measures an AI’s ability to solve real GitHub issues, was at roughly 2% when it launched in late 2023. A recent Anthropic model sits at 93.9%, effectively saturating it. METR’s time-horizon plot tracks how long an AI can work independently before needing human recalibration: 30 seconds in 2022, 4 minutes in 2023, 40 minutes in 2024, 6 hours in 2025, 12 hours today. The trajectory, if it holds, suggests 100-hour autonomous work sessions by the end of this year.
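The time-horizon numbers amount to a simple compounding claim, and the extrapolation can be checked in a few lines. The sketch below is illustrative only: the calendar year attached to the "12 hours today" figure is my assumption, and METR's actual methodology is more involved than a geometric-mean fit.

```python
import math

# Time horizons quoted above, converted to hours. The year labels are
# approximate; in particular, 2026 for "12 hours today" is an assumption.
horizons = {2022: 30 / 3600, 2023: 4 / 60, 2024: 40 / 60, 2025: 6.0, 2026: 12.0}

years = sorted(horizons)

# Average multiplicative growth per year, computed on a log scale.
log_ratios = [math.log(horizons[b] / horizons[a]) for a, b in zip(years, years[1:])]
yearly_multiplier = math.exp(sum(log_ratios) / len(log_ratios))

# Extrapolate one year forward from the latest (12-hour) figure.
projected = horizons[years[-1]] * yearly_multiplier
print(f"average yearly multiplier: ~{yearly_multiplier:.1f}x")
print(f"projected horizon one year out: ~{projected:.0f} hours")
```

A crude fit like this yields a multiplier of roughly 6x per year and a projected horizon in the high tens of hours, the same order of magnitude as the essay's 100-hour figure; the exact number depends heavily on how you weight the most recent, partial-year data point.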

Clark marshals similar progressions across AI fine-tuning, kernel design, scientific paper replication, and even alignment research itself. His throughline is the same in each: AI is now genuinely competent at the unglamorous scaffolding of AI development — the debugging, the experiment runs, the parameter sweeps, the code reviews. And crucially, it can now do these things not just faster than humans, but for longer, with less supervision.

There’s a Thomas Edison quote at the center of the essay: “Genius is 1% inspiration and 99% perspiration.” Clark’s claim is that AI has become very good at the perspiration. The question of whether it can supply the inspiration — the paradigm-shifting insight, the Move 37 — remains open. But he argues it may not need to. Most of what has moved the AI field forward has been sustained, methodical work, not lone flashes of genius. If you can automate the 99%, you have something that compounds.

There’s a data point that makes Clark’s argument feel less like forecast and more like dispatch. Last month Boris Cherny, who runs Anthropic’s Claude Code, disclosed that he hasn’t written a line of code by hand in more than two months. Every pull request — 22 one day, 27 the next — written entirely by Claude. Company-wide, roughly 70–90% of Anthropic’s code is now AI-generated. Anthropic’s stated position: “We build Claude with Claude.” The loop Clark is describing as a probability by 2028 is already running, at least partially, today.

The word Clark uses for the threshold he’s describing is not “singularity” or “AGI.” It’s quieter than that. He calls it “automated AI R&D” — the point at which a frontier model can autonomously train its own successor. It’s a specific, falsifiable thing. And he puts a number on it: 60% by end of 2028, 30% by end of 2027.

I’ve been writing about the dark software factory and the 3D printer that prints better printers, finding metaphors for what seems like an inexorable process. Clark’s essay is a different kind of writing about the same thing — the primary source document, the engineer’s log, the inventory of evidence. Reading it is a little like watching someone carefully pack boxes before a move. Each individual item seems manageable. But there are a lot of boxes.

What he’s describing — if the trend holds — is not a feature or a product launch. It’s a breakout. The moment the loop closes and the system starts building itself. He’s not certain it happens. He just thinks it’s more likely than not, and he thought you should know.


The 3D Printer That Prints Better Printers

Imagine a 3D printer that looks at its own design and begins printing a better version of itself. The loop closes. What had always required an external human intelligence now happens inside the machine. All by itself.

Jack Clark — Anthropic co-founder, someone who has spent years closer to this technology than almost anyone — puts the odds of this happening by 2028 at better than even. I have been turning that number over ever since I heard it. Not the technical claim, exactly. The feeling of it.

We have grown used to AI accelerating our work. Coders watch models close GitHub issues at rates that would have seemed miraculous eighteen months ago. Researchers delegate experiment design, kernel optimization, even the fine-tuning of smaller models. The scaffolding of AI progress is already being built, in part, by the systems themselves. But the moment the system begins to redesign the scaffolding — that is something new.

What unsettles me is not the raw capability, though that is staggering. It is the loss of distance.

For most of technological history, the creator stood outside the creation. Even the most sophisticated tools remained tools. Now the distinction begins to blur. A model that can meaningfully improve its own training process, its own architecture, its own alignment constraints, is no longer merely reflecting human intent back at us. It is participating in the shaping of its own nature. And because each iteration can happen faster than the last, the curve steepens in ways our intuitions, tuned to linear progress, struggle to grasp.

Clark is careful, as he should be. He speaks of validation work that will still fall to humans, of the need to broaden the pipes through which abundance flows, of preparing defense-dominant postures against misuse. Yet the image that lingers for me is quieter: the silence after the handoff. What does it feel like when the thing you have been painstakingly teaching begins to teach itself — and then to teach its teachers?

I think about Leo Szilard at the traffic light, or the first controlled chain reaction under the stands at the University of Chicago. Moments when a new regime of possibility quietly announced itself. Recursive self-improvement carries that same charge — not a single event but a process, one that could accelerate the very pace of events themselves.

The more I sit with it, the more I return to an older tension in our relationship with tools. We build them to extend ourselves, and in doing so we are always, subtly, extending — or perhaps risking — what we are. The values I try to live by — generosity, curiosity, compassionate honesty — are not defined in specifications. They are refined in friction, in relationship, in the slow work of being human with other humans. If the machines begin to optimize their own lineage at speeds we cannot match, will we still have the bandwidth to tend the parts of ourselves that no algorithm can yet measure?

I don’t know. None of us do. That uncertainty feels honest.

What feels clearer is the invitation. Not to fear the printer that prints better printers, nor to worship it, but to remain awake inside the loop. To ask, as each new version arrives, what kind of world we are collectively printing — and whether the values we claim to hold are baked into the design or merely etched on the surface, likely to wear away under the heat of iteration.

The light is still yellow. We are still deciding whether to step off the curb. But the traffic is already moving faster than it was a moment ago.