Six skill benchmarks, the 99% perspiration thesis, and the question Clark leaves open

By Thorsten Meyer — May 2026

The section of Jack Clark’s Import AI #455 that does the most empirical work toward the automated AI R&D thesis is the one titled “AI is getting good at core science skills essential to AI R&D.” Clark catalogs six skill benchmarks, walks through the trajectory on each, sets up the Edison “1% inspiration, 99% perspiration” framing, gestures at the creativity question through Move 37 and Erdős problems, and lands on a specific conclusion: “AI can today automate vast swatches, perhaps the entirety, of AI engineering. It is not yet clear how much of AI research it can automate, given that some aspects of research may be distinct from the engineering skills.”

This is the second piece in the outside read series on Clark’s essay. The first piece addressed Clark’s “coding singularity” section and argued that coding is the wedge into recursive self-improvement. This piece works through Clark’s evidence base for automated AI R&D specifically — the six skill benchmarks, the perspiration-vs-inspiration framing, and what I take to be the structural question Clark leaves open.

The headline finding: Clark’s conclusion is correct and possibly understated for engineering. The residual research question is real but may be less binding than the framing suggests. Engineering is automated. Research is the residual. The structural read is that research may itself be engineering at scale — in which case the residual closes faster than Clark’s framing implies. The institutional response should not bet on inspiration being a permanent moat.

What follows is the walk-through of Clark’s six benchmarks with my reading on each, the analysis of the perspiration-vs-inspiration framing, the five strategic dimensions Clark doesn’t develop, and the structural read on what this implies for the next 32 months.

Engineering Is Automated. Research Is the Residual.
DISPATCH / MAY 2026 CLARK EXTENDED · AUTOMATED AI R&D · OUTSIDE READ 02

Engineering is automated.
Research is the residual.

Six skill benchmarks. Edison’s framing. The question Clark leaves open is whether research is just engineering at scale.

Jack Clark’s Import AI #455 catalogs six benchmarks measuring AI capability on AI R&D tasks and concludes “AI can today automate vast swatches, perhaps the entirety, of AI engineering.” The residual question is research. The structural read on the residual: it may not be a permanent moat.

99% · Perspiration · Automated / 1% · Inspiration · Residual
Edison · 150 years on · still right
The structural read
AI is excellent at the 99% of AI R&D — engineering, optimization, kernel design, fine-tuning. The 1% inspiration may be a permanent moat. Or it may dissolve as inspiration is recognized as compressed perspiration.
52× · AI speedup · Mythos · Anthropic CPU task · vs 4× human in 4-8 hours · 13× faster than researchers
95.5% · CORE-Bench · declared “solved” Dec 2025 · up from 21.5% Sep 2024 · paper reproduction · saturated
6 of 6 · skill benchmarks converging on saturation · CORE · MLE · Kernel · PostTrain · CPU · Alignment
1 / 700 · Erdős problems · “interesting” solutions · inspiration data point · ambiguous reading
The six skill benchmarks · all converging on saturation

Six skills. One trajectory.

Clark catalogs six benchmarks measuring AI capability on AI R&D-relevant tasks. Each individual benchmark could be noise. Six benchmarks moving together is a curve. The pattern is the cascade observed across the broader Clark series — visible here in the specific R&D-skill domain.

The six skill benchmarks · trajectory data
Five of six saturated or paused; one (PostTrainBench) at half human baseline — the recursive trigger.
CORE-Bench · Research reproduction
21.5% Sep 2024 → 95.5% Dec 2025 (Opus 4.5). Benchmark author declared it “solved.” 15 months. 4.4× improvement. Research replication = solved engineering problem.
SOLVED
MLE-Bench · Kaggle competitions
16.9% Oct 2024 → 64.4% Feb 2026 (Gemini 3). 16 months. Leaderboard paused April 2026 pending fair-comparison rework. ~Bronze-medal-or-better on 2/3 of 75 Kaggle competitions.
PAUSED
Kernel design · GPU optimization
No single benchmark. Multiple production papers across 2025-2026. Meta uses LLMs for Triton kernels in production. AscendCraft for Huawei. From research curiosity to deployment standard.
PRODUCTION
PostTrainBench · AI fine-tuning AI
Opus 4.6 / GPT-5.4 at 25-28% vs human 51%. AI currently at half human baseline. The recursive self-improvement trigger — leading indicator for AI exceeding human on training AI.
HALF-HUMAN
Anthropic CPU · LLM training speedup
2.9× May 2025 → 16.5× → 30× → 52× April 2026. 11 months. Human baseline: 4× in 4-8 hours. Mythos is 13× faster than a researcher on a full workday’s task.
13× HUMAN
Automated alignment · Anthropic proof-of-concept
Anthropic’s AI agents beat human-designed baseline on scalable oversight. Small-scale, not yet production. The most consequential benchmark — AI doing AI alignment research is the recursive concern.
PROOF-OF-CONCEPT
Engineering is automated. The question is whether research is residual.
The 1% inspiration question · creativity data points

Three data points. Mixed signal.

Clark provides three data points on the creative-spark question. Yes-evidence: Erdős-1051, centaur math discovery, sporadic Move-37-style moments. No-evidence: low yield, framing dependence, absence of acceleration. The mixed signal is the honest read.

The creativity data · three observations
Inspiration data isn’t dispositive; the next 12-24 months produce the empirical resolution.
▲ Move 37 · 2016
AlphaGo’s creative move
10 yrs since · no replacement
Canonical example of AI producing creative-feeling insight. 10 years on, Move 37 hasn’t been replaced by a comparably impressive flash of insight. Capability has risen dramatically; discovery moments haven’t.
Weakly bearish signal · per Clark
▲ Erdős Problems · 2025-26
Math team + Gemini
13 / 700 · 1 “interesting”
Team attacked ~700 problems with Gemini. Got 13 solutions; 1 deemed “interesting” (Erdős-1051). Conservatively framed: “slightly non-trivial,” “somewhat broader,” “mild.” 0.14% rate of interesting insights from massive parallel exploration.
Ambiguous · low yield, real result
▲ Centaur Discovery · 2026
Real math proof
substantial Gemini contribution
UBC/UNSW/Stanford/DeepMind paper with “very substantial input from Google Gemini and related tools.” Real proof, real publication. “Centaur” framing — human + AI together — not AI alone. Real research advance through partnership.
Yes-evidence · with caveat

The data supports two readings. Pessimistic: rare moments suggest creative insight is qualitatively distinct from engineering work. Optimistic: rare moments are an artifact of low-volume exploration; more shots on goal yields more discoveries. Both readings are consistent with Clark’s “vast swatches, perhaps the entirety” claim. They differ on the residual.

What Clark doesn’t develop · five strategic dimensions

Five dimensions Clark gestures at but leaves underdeveloped.

Clark’s section is rigorous on the empirical evidence. But five strategic dimensions go underdeveloped, and each matters for the institutional response that the Clark series synthesis argues is structurally inadequate.

Five strategic dimensions Clark doesn’t develop
Each affects the institutional response calibration for the 32-month window.
01
The competitive lab dynamic
Each lab publishes capability data as competitive positioning. Labs that automate R&D pull ahead structurally — their next model is trained by AI agents more capable than competitors’. No lab can unilaterally slow down without losing the race. Coordination problem at scale.
COMPETITION
02
The interpretability gap
When AI does the R&D, humans understand less about how next models are made. Hyperparameters, training data composition, optimization decisions — all from AI agents. Interpretability of outputs assumes you know how the model was built. The assumption is slipping.
INTERPRETABILITY
03
The brain drain question
Senior researchers move up the abstraction stack. Entry-level apprenticeship through engineering schlep is closed. Same “missing generation” dynamic as software engineering. Remaining human AI talent concentrates at frontier labs with the agent infrastructure.
LABOR MARKET
04
The volume thesis · more shots on goal
If inspiration is volume-derived, more compute for R&D exploration = more rare discoveries. Compute capacity directly translates to research output velocity. Compute geography becomes research geography. Frontier labs with privileged compute capture the volume upside.
COMPUTE = RESEARCH
05
The recursive alignment concern
Automated alignment research means AI produces the alignment knowledge AI is aligned by. Verifier and system are the same generation of AI. Anthropic’s proof-of-concept makes this operational. Current peer review and publication frameworks weren’t designed for this.
VERIFIER-SUBJECT UNITY
The two readings · does inspiration bound the trajectory?

Two readings. Different equilibria.

The structural question Clark leaves open: is research a permanent moat that bounds automated AI R&D, or is it engineering at scale that dissolves with more shots on goal? Both readings are consistent with the current data. They differ by orders of magnitude in consequences.

Two readings of the residual question
Both consistent with Clark’s evidence. The next 12-24 months resolve the empirical question.
▲ READING 01 · INSPIRATION IS BINDING
Research is qualitatively distinct.
Creative insight is something AI fundamentally lacks. Rare discovery moments don’t accelerate with capability. Research bounds the trajectory at human-research-pace.
Supporting evidence: Move 37 unreplaced for 10 years. Erdős discovery at 0.14% yield. PostTrainBench at half human baseline. Centaur configuration prevalent — AI not autonomous in research.
Consequence:
Productivity multiplier years
▲ READING 02 · INSPIRATION IS COMPRESSED PERSPIRATION
Research is engineering at scale.
Rare discovery moments are an artifact of low-volume exploration. More shots on goal yields more discoveries proportionally. Research dissolves as automated R&D scales.
Supporting evidence: CPU speedup at 13× human on optimization tasks. Six benchmarks converging on saturation. Vaswani et al. transformer insight emerged from iteration. Inspiration historically inseparable from perspiration.
Consequence:
Recursive loop operational
Stakeholder implications · five audiences

Five audiences. Asymmetric cost of being wrong.

The institutional response should not bet on inspiration being a permanent moat. If the distinction holds, capacity built is still useful. If it closes, capacity is necessary. Asymmetric cost-of-being-wrong points toward building now.

Stakeholder implications · by audience
Career, research strategy, policy framework, investment thesis, public engagement.
▲ FOR AI RESEARCHERS IN INDUSTRY
Senior-as-supervisor is the durable role.
Engineering work — kernel design, training optimization, paper reproduction — is being automated. Career value moves up the abstraction stack: research direction setting, supervision of AI agents, validation of AI-produced outputs. Plan for the supervisor role; treat the implementer role as table stakes.
▲ FOR AI RESEARCHERS IN ACADEMIA
Inspiration-heavy work is the comparative advantage.
Academic labs can’t compete on volume with frontier-lab automated R&D pipelines. Focus on the inspiration-heavy work: theoretical foundations, interpretability methodology, alignment frameworks, evaluation design. 1 deep insight beats 1000 quick experiments in the bounded-academic-compute regime.
▲ FOR POLICYMAKERS
The framework is built for human researchers.
Current policy treats AI R&D as something done by human researchers in regulated organizations. Framework breaks when AI agents do most of the R&D. Liability for AI-produced research outputs? Corporate disclosure for AI-driven research? Regulation when researcher and subject are both AI? None of these have current answers.
▲ FOR INVESTORS
Lab competition is productivity multiplier #2.
(a) Labs with the best automated R&D pipelines pull ahead structurally. Anthropic CPU speedup (2.9× → 52×) is the publicly available signal. (b) Compute as research input — the volume thesis means compute capacity translates to research velocity. Compute supply governance is the new AI research moat.
▲ FOR EVERYONE ELSE
The wedge has produced the recursive loop.
The coding singularity piece argued coding is the wedge into recursive self-improvement. This piece shows the wedge has produced the capability set required for the loop to be operational at the engineering layer. The residual question — research — resolves over the next 12-24 months. What gets built institutionally during that period determines the equilibrium.

Engineering is automated. The residual is the question. The institutional response should not bet on inspiration being a permanent moat.

— The structural read · May 2026

I · The six skill benchmarks, with current data

Clark catalogs six benchmarks measuring AI capability on AI R&D-relevant tasks. The trajectories matter individually; the pattern across them matters more. All six are saturating or trending toward saturation on similar cadences. This is the same pattern the benchmark cascade piece in my Clark series articulated — but the specific evidence base in Clark’s “AI getting good at core science skills” section is what makes the cascade visible.

1 · CORE-Bench (research reproduction)

The trajectory: 21.5% (GPT-4o + CORE-Agent scaffold, September 2024) → 95.5% (Opus 4.5, December 2025), with one of the benchmark’s authors publicly declaring it “solved.” Fifteen months. A 4.4× improvement.

Reading this as a researcher who has tried to reproduce papers: reproducing arbitrary computational research papers is hard. The benchmark specifies that the agent must install libraries and dependencies, run the code, search through outputs, and answer questions about results. Each step is a place where ordinary frictions defeat human researchers regularly. Dependency conflicts, version mismatches, undocumented preprocessing, missing data, hardware-specific behavior. CORE-Bench at 95.5% means an AI system handles these frictions at the level a competent post-doc would. The “solved” framing from the benchmark author is reasonable; whatever capacity remains in the 4.5% gap is probably at the noise floor of the benchmark itself.
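
To make that friction surface concrete: a minimal sketch of what a reproduction attempt structurally looks like (clone, install, run, read outputs). The repo layout here ("requirements.txt", "run_experiment.py", an "outputs/" directory) is a hypothetical stand-in, not CORE-Bench's actual harness or the CORE-Agent scaffold. Every subprocess call is a point where the run can die.

```python
import subprocess
from pathlib import Path

def attempt_reproduction(repo_url: str, workdir: str = "repro") -> str:
    """Sketch of a paper-reproduction attempt. Each step is a friction
    point: dependency conflicts, version mismatches, undocumented
    preprocessing, missing data. CORE-Bench scores whether an agent
    survives all of them and can answer questions about the results."""
    work = Path(workdir)
    subprocess.run(["git", "clone", repo_url, str(work)], check=True)
    subprocess.run(["pip", "install", "-r", str(work / "requirements.txt")],
                   check=True)                       # hypothetical repo layout
    run = subprocess.run(["python", str(work / "run_experiment.py")],
                         capture_output=True, text=True,
                         timeout=3600, check=True)
    # Collect what the run produced; an agent would then search this text
    # to answer the benchmark's questions about the paper's reported results.
    artifacts = [p.read_text() for p in sorted(work.glob("outputs/*.txt"))]
    return "\n".join([run.stdout, *artifacts])
```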

What this implies operationally: the bottleneck on reproducing existing research is no longer “can it be reproduced.” It’s “should it be reproduced.” When an AI agent can take a paper and run its experiments at 95.5% reliability, the marginal cost of reproducing any specific paper drops to essentially the inference cost of running the agent. Research replication, which has been an ongoing crisis in academic ML, is now a solved engineering problem.

2 · MLE-Bench (Kaggle competitions)

The trajectory: 16.9% (o1-preview with AIDE scaffolding, October 2024) → 64.4% (Gemini 3 in agent harness with search, February 2026). Sixteen months. ~3.8× improvement.

The benchmark setup is concrete: 75 Kaggle competitions across NLP, computer vision, signal processing. Bronze-medal-or-better threshold. A 64.4% score means the AI agent reaches bronze-medal performance on roughly two-thirds of competitions — competitions where the human medalists are professional ML practitioners. This is competitive with mid-tier human Kaggle performance.

What’s worth noting: OpenAI paused the MLE-bench leaderboard on April 24, 2026 — “we are currently not taking any new submissions to the leaderboard while we develop an improved process for ensuring submissions are fair and comparable.” The benchmark organizers are responding to the same problem METR is responding to with Time Horizon 1.1: the measurement instrument is being outgrown by the models. When the leaderboard custodians have to pause submissions to design a fairer comparison process, the underlying capability has moved past what the original benchmark was designed to measure.

The pattern holds across CORE-Bench, MLE-Bench, and (as the coding singularity piece documented) METR’s time horizons. Three independent benchmarks measuring three different AI R&D skill domains are all hitting the saturation or measurement-limit point on overlapping timelines. This is the cascade.

3 · Kernel design

Clark notes there’s no single popular benchmark here, so we can’t model progress over time on a single curve. What there is instead: a stream of research papers across 2025-2026 demonstrating concrete advances. DeepSeek models building better GPU kernels (Import AI #400). Automated PyTorch-to-CUDA conversion (#401). Meta using LLMs to generate optimized Triton kernels for production infrastructure (#439). AscendCraft for Huawei Ascend chips (#444). Fine-tuned open-weights models for kernel design (“Cuda Agent”, #448).

This is the texture of a capability becoming production-grade. When the literature is full of “we used LLMs to optimize kernels for X” papers across a span of months, with each paper showing meaningful results, the underlying capability has crossed from research curiosity to deployment standard. Meta’s Triton work is the most consequential of these — using LLMs to produce optimized kernels for production infrastructure, not just for benchmarks. The transition from “AI can help with kernel design in papers” to “AI generates the kernels we run in production” is the deployment transition.

Clark’s caveat is honest: kernel design has properties (verifiable rewards, well-defined optimization targets) that make it unusually amenable to AI-driven R&D. It is a real qualification, but it partly undercuts itself: AI R&D has similarly verifiable rewards for the engineering layer — does the model train? Does the loss curve descend? Does benchmark performance improve? The verifiability that makes kernel design tractable is structurally similar to what makes most AI engineering tractable.
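
The shape of that verifiable reward fits in a dozen lines. A minimal sketch, with the candidate and reference implementations left abstract; this is the structure of the signal, not any lab's actual harness:

```python
import time
import numpy as np

def _best_time(fn, x, trials: int) -> float:
    best = float("inf")
    for _ in range(trials):
        t0 = time.perf_counter()
        fn(x)
        best = min(best, time.perf_counter() - t0)
    return best

def verify_and_score(candidate, reference, trials: int = 5) -> float:
    """Correctness gate plus speedup measurement: the checkable reward
    that makes kernel-style engineering amenable to AI-driven search."""
    x = np.random.default_rng(0).standard_normal((2048, 2048)).astype(np.float32)
    if not np.allclose(candidate(x), reference(x), atol=1e-4):
        return 0.0               # wrong output scores zero, however fast
    return _best_time(reference, x, trials) / _best_time(candidate, x, trials)

# Usage: reward = verify_and_score(my_fused_softmax, naive_softmax)
```

A search process can hammer on a reward like this indefinitely, which is the property kernel design and most AI engineering share.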

4 · PostTrainBench (AI fine-tuning AI)

The setup: AI systems take smaller open-weight models (Qwen 3 1.7B, Qwen 3 4B, SmolLM3-3B, Gemma 3 4B) and fine-tune them to improve performance on benchmarks (AIME 2025, Arena Hard, BFCL, GPQA Main, GSM8K, HealthBench, HumanEval). Weighted average across all combinations. Human baseline: 51% (the existing instruct-tuned versions, developed by frontier-lab researchers).
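
The aggregate score has a simple grid shape. A sketch of that shape, with random cell scores and uniform placeholder weights standing in for the benchmark's actual normalization:

```python
import numpy as np

# Hypothetical shape of the PostTrainBench aggregate: one normalized score
# per (model, benchmark) cell after the agent's fine-tuning run, combined
# as a weighted average. Random cells and uniform weights are placeholders,
# not the benchmark's actual normalization.
models = ["Qwen3-1.7B", "Qwen3-4B", "SmolLM3-3B", "Gemma3-4B"]
benchmarks = ["AIME 2025", "Arena Hard", "BFCL", "GPQA Main",
              "GSM8K", "HealthBench", "HumanEval"]

cells = np.random.default_rng(1).uniform(0.0, 1.0, (len(models), len(benchmarks)))
weights = np.full(len(benchmarks), 1 / len(benchmarks))    # placeholder: uniform
print(f"headline score: {(cells @ weights).mean():.1%}")   # human baseline: 51%
```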

The AI scores as of April 2026: Opus 4.6 at 25-28%, GPT-5.4 at similar levels. Approximately half the human baseline.

This is the most editorially significant of Clark’s six benchmarks — and it’s the one I wish the discourse paid more attention to. PostTrainBench is the recursive self-improvement task: AI training AI. The human baseline is the production work of talented frontier-lab researchers. AI at half the human baseline is not “AI can do this work.” It is “AI currently trails the human baseline by roughly 2×.”

The trajectory matters more than the level. At half human baseline, the question is how fast AI closes the gap. If the cadence on the other five benchmarks is the reference (~3-5× improvement per 12-18 months), PostTrainBench should hit human parity within 12 months and exceed it within 18-24. The recursive self-improvement loop becomes structurally operational at the moment AI exceeds the human baseline on this specific task. PostTrainBench is the leading indicator for that threshold.
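
That projection is a two-line extrapolation, worth making explicit so its assumptions are visible. Assuming PostTrainBench follows the same multiplicative cadence as the other benchmarks (an assumption, not data), parity arrives in roughly 5 to 11 months:

```python
# Back-of-envelope: if PostTrainBench follows the ~3-5x-per-12-18-month
# cadence of the other five benchmarks, when does it cross the 51% human
# baseline? Pure extrapolation under assumed rates, not a forecast.
current, human = 26.5, 51.0                   # % scores, April 2026

for mult, months in [(3, 18), (5, 12)]:       # slow and fast cadences
    monthly = mult ** (1 / months)            # implied monthly growth factor
    score, m = current, 0
    while score < human:
        score, m = min(score * monthly, 100.0), m + 1
    print(f"{mult}x per {months}mo -> parity in ~{m} months")
```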

5 · Anthropic CPU speedup task

The trajectory Clark cites: 2.9× (Opus 4, May 2025) → 16.5× (Opus 4.5, November 2025) → 30× (Opus 4.6, February 2026) → 52× (Mythos Preview, April 2026). Human baseline for comparison: 4× speedup in 4-8 hours.

This is the single most striking data point in Clark’s section, and it deserves explicit treatment.

The 52× score means Mythos Preview achieves a 52× speedup on a CPU-only LLM training implementation. The human baseline of 4× in 4-8 hours describes what a competent researcher can do in roughly a day of focused work. Mythos exceeds the human baseline by 13×. Not 13% faster than human. Thirteen times faster than human on a task that takes a human a full workday.
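
Two numbers are worth extracting from the published points, treating the trajectory as a single exponential (a simplification: four points across three different models is not a clean curve):

```python
import math

# The published CPU-speedup points, as (months since May 2025, speedup).
points = [(0, 2.9), (6, 16.5), (9, 30.0), (11, 52.0)]
human_baseline = 4.0                 # 4x speedup in a 4-8 hour workday

growth = math.log(points[-1][1] / points[0][1]) / points[-1][0]
print(f"doubling time: {math.log(2) / growth:.1f} months")           # ~2.6
print(f"vs human baseline: {points[-1][1] / human_baseline:.0f}x")   # 13x
```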

What this is measuring concretely: the AI is exploring optimization paths in a constrained-but-realistic optimization problem and finding speedups well beyond what the human-baseline approach yields. This is exactly the kind of perspiration task Edison was describing. The optimization problem is well-specified. The reward is verifiable. The search space is large but tractable. AI does it 13× better than a human researcher because it explores more of the search space, applies more optimizations in combination, and isn’t bottlenecked by the human cognitive overhead of holding multiple optimization paths in mind simultaneously.

The trajectory from 2.9× to 52× in 11 months is the canonical example of AI doing AI work. This isn’t AI doing customer service or content generation or coding to spec. This is AI optimizing the training pipeline of an LLM — the exact problem domain where frontier-lab researchers spend their careers. And on this specific problem, AI is currently 13× faster than the researchers who would be doing it manually.

6 · Automated alignment research

The Anthropic proof-of-concept: AI agents primed with a research direction autonomously attempt to beat a human baseline on a scalable oversight problem. The agents succeeded at beating the human-designed baseline. Small-scale, not yet generalizing to production models, but it works.

This is the most consequential of the six benchmarks for the broader thesis. The same Anthropic that publishes Clark’s essay arguing 60%+ probability of automated AI R&D by end of 2028 has internally demonstrated proof-of-concept of AI doing AI alignment research. The proof-of-concept is the most credible signal of where capability sits, because it’s done by the people who would know.

What this implies for the recursive concern: if AI is doing alignment research on AI, the compounding error problem becomes the relevant frame for evaluating the outputs. Is the AI’s alignment research checking the AI’s alignment research? Who is the verifier when both researcher and subject are AI? The proof-of-concept is the first observable instance of this structural problem becoming operational. Anthropic’s framing presents it as productive (alignment research scales up). The structural framing acknowledges that productive scaling of alignment research and the verifier-and-system unity problem are happening simultaneously.


II · Six benchmarks. One pattern.

Clark presents the six benchmarks as discrete data points supporting the broader thesis. The structural read is that all six are converging on the same trajectory. Three observations:

The cascade is empirical. Six different skill benchmarks, measuring six different aspects of AI R&D capability, all showing 3-5× improvement per 12-18 months. CORE-Bench solved. MLE-Bench past human-bronze-medal threshold. Kernel design moving from research to production. PostTrainBench at half human baseline. CPU speedup at 13× human. Automated alignment research at proof-of-concept beating baseline. One benchmark could be noise. Six benchmarks moving together is a curve.

The measurement instruments are saturating. CORE-Bench declared solved. MLE-Bench leaderboard paused. METR’s task suite labeled unreliable above 16 hours. SWE-Bench Verified hitting its noise floor, with Mythos Preview at 93.9%. The infrastructure for measuring AI capability is being outgrown by the models. This is itself a measurement: when the rate of capability progress exceeds the rate of benchmark development, the gap between published metrics and actual capability widens. We are likely measuring with instruments designed two years ago, for a capability regime that no longer holds.

The convergence is structural, not coincidental. The six benchmarks measure different things, but they share a common underlying capability: AI doing the kinds of well-specified, verifiable-reward, optimization-shaped engineering work that constitutes the bulk of AI R&D. When you can do one of these tasks well, the same underlying capability handles the others. The convergence is what you’d expect when a general capability matures, not what you’d expect when separate capabilities improve independently.

This is the empirical evidence base for Clark’s “vast swatches, perhaps the entirety” conclusion. The conclusion is correct because the six skill trajectories all support it. The remaining question is whether the engineering skills generalize to research.


III · The 99% perspiration thesis

Clark closes his section on the “general relativity vs Lego” question with Edison’s framing: “genius is 1% inspiration and 99% perspiration.” The argument: AI is excellent at perspiration. AI may be limited on inspiration. But perspiration alone may be sufficient for AI to push itself forward — albeit at a slower rate than it would if it could also generate inspiration.

This is the right framing for the structural question. It’s also worth pushing on directly.

The Edison framing is about individual discovery. The 1% inspiration is the moment of insight; the 99% perspiration is making the insight work in practice. The framing assumes the two are separable activities — that you can identify “the moment of inspiration” as distinct from “the work of making it operational.” Edison’s own career arguably supports this — the discrete patentable inventions came from identifiable moments of insight, even if making each one work required years of subsequent grinding.

But in modern AI research, the inspiration-vs-perspiration distinction is more porous than Edison’s career suggests. Consider how the transformer architecture happened: Vaswani et al. were working through a series of attention mechanism experiments. The “Attention Is All You Need” insight emerged from iterating on existing attention work. The inspiration was not separable from the perspiration that produced it; the perspiration is what produced the inspiration. Similarly with mixture-of-experts, scaling laws, RLHF, constitutional AI — each “insight” emerged from extensive engineering work that produced both the insight and its first implementation simultaneously.

This matters for the AI-doing-AI-R&D question because it suggests inspiration may itself be a perspiration-derived phenomenon. If “inspiration” is what you call the most-productive subset of “perspiration” with hindsight, then automating perspiration at sufficient scale automatically increases the rate of inspiration. Not because AI invents new ideas in the romantic sense, but because at sufficient volume and variation of well-executed engineering work, novel patterns emerge from the search process.

This is the “more shots on goal” thesis. The Erdős data is consistent with it: 700 attempts, 13 solutions, 1 “interesting.” That’s a 0.14% rate of interesting insights from massive parallel exploration. If you can do 700,000 attempts, you might get 1,000 interesting insights. Whether each individual AI insight is “creative” in the human sense matters less than whether the aggregate output is. And the aggregate is what moves the field.
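
The naive version of that arithmetic, stated as a model so the assumptions are visible: discoveries as a Poisson process at the observed per-attempt rate, with rate stability and independence of attempts both assumed rather than established:

```python
from math import sqrt

# Volume thesis, naive version: treat "interesting" discoveries as a
# Poisson process at the observed per-attempt rate. Both the rate's
# stability and the independence of attempts are assumptions, not data.
rate = 1 / 700                       # one Erdős-1051-grade result per ~700 tries

for attempts in (700, 7_000, 70_000, 700_000):
    mu = rate * attempts             # expected discoveries at this volume
    print(f"{attempts:>7,} attempts -> {mu:8.1f} ± {sqrt(mu):.1f} expected")
```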

The honest read on the perspiration thesis: Clark says AI is great at perspiration and uncertain at inspiration, with perspiration alone being sufficient to push AI R&D forward at some rate. I’d add: if inspiration is partly a perspiration-derived phenomenon, then automating perspiration produces some amount of inspiration automatically. The rate may be slower than human researchers achieve per unit of effort, but the volume is dramatically higher. The net rate may be comparable or higher. This is the optimistic reading of the Erdős data and the centaur math discovery — AI generating novel results not via individual insight but via scale.


IV · The 1% inspiration question, with the available data

Clark provides three data points on the creative-spark question:

The Erdős problems: A team worked with Gemini to attack ~700 Erdős problems. They got 13 solutions. 1 was deemed “interesting” — Erdős-1051, which the team described as “an early example of an AI system autonomously resolving a slightly non-trivial open Erdős problem of somewhat broader (mild) mathematical interest.” The qualifications are striking: “slightly non-trivial,” “somewhat broader,” “mild.” This is a real result but a conservatively-framed one.

The centaur math discovery: A research team across UBC, UNSW, Stanford, and DeepMind published a new math proof with “very substantial input from Google Gemini and related tools.” The proof is real, published, and meaningfully advances some specific math problem. The framing is “centaur” — human + AI working together — not “AI alone.”

Move 37 from AlphaGo (2016): The canonical example of AI producing a creative-feeling insight. Clark’s observation: ten years later, Move 37 hasn’t been replaced by a comparably impressive flash of insight. This is a “weakly bearish signal” per Clark. The frequency of “Move 37 moments” hasn’t accelerated even as base AI capability has risen dramatically.

Reading this honestly: the creative-spark data is mixed. Yes-evidence: Erdős-1051, centaur math discovery, sporadic Move-37-style moments. No-evidence: low yield (700→1 interesting), framing dependence (centaur not autonomous), absence of acceleration in discovery moments.

My reading: the discovery moments are real but rare, and the rate isn’t visibly accelerating with capability. This is the most ambiguous of the available data points. It supports either reading:

  • Pessimistic for AI (inspiration is binding): rare discovery moments suggest creative insight is qualitatively distinct from engineering work, doesn’t scale with model capability, and may require something AI fundamentally lacks. In this reading, the engineering-vs-research distinction Clark draws is real and the research portion bounds the trajectory.
  • Optimistic for AI (inspiration is volume-derived): rare moments are an artifact of the small number of researchers running these experiments, not an artifact of AI capability. When the volume of AI-driven research scales, the discovery moments scale proportionally. In this reading, the engineering-vs-research distinction will dissolve as research becomes engineering at scale.

Neither reading is dispositive on current data. The empirical question resolves over the next 12-24 months as the volume of AI-driven research output increases. If Erdős-type rare discoveries scale roughly linearly with attempt volume, the optimistic reading is supported. If they don’t scale — if discovery requires something else — the pessimistic reading is supported. Both readings are consistent with Clark’s “vast swatches, perhaps the entirety, of AI engineering” automation claim. The two readings differ on the trajectory of automated AI research, which is the residual.
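
The “neither reading is dispositive” point can be made quantitative. One interesting result in ~700 attempts barely constrains the underlying discovery rate; a standard exact Poisson interval spans more than two orders of magnitude:

```python
from scipy.stats import chi2

# Exact 95% Poisson interval for k observed events (standard chi-square
# construction), applied to 1 "interesting" result in ~700 attempts.
k, attempts = 1, 700
lo = 0.5 * chi2.ppf(0.025, 2 * k) / attempts
hi = 0.5 * chi2.ppf(0.975, 2 * k + 2) / attempts
print(f"per-attempt rate: {lo:.2e} to {hi:.2e}")   # ~3.6e-05 to ~8.0e-03
# A ~220x range: the current data genuinely cannot separate the two
# readings. Scaling attempt volume is what shrinks this interval.
```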


V · What Clark doesn’t develop

Clark’s section is rigorous on the empirical evidence and honest about the inspiration uncertainty. There are five strategic dimensions the section gestures at but doesn’t develop — each of which matters for the institutional response that the synthesis piece of my Clark series argues is structurally inadequate.

1 · The competitive lab dynamic

Each frontier lab is publishing capability data on AI doing AI R&D. Each publication is a competitive signal. Anthropic publishes the CPU speedup trajectory (2.9× → 52×) and the automated alignment research proof-of-concept. OpenAI publishes MLE-Bench and now manages the paused leaderboard. Google publishes the centaur math discovery and the Erdős work. The pattern across all three labs: capability disclosure as competitive positioning for the broader claim that “we have the most automated R&D pipeline.”

The strategic implication: labs that successfully automate R&D pull ahead faster than labs that don’t. The first lab to operationalize end-to-end automated AI R&D at production scale captures a structural advantage that compounds — their next model is trained by AI systems that are themselves more capable than the AI systems training the competitors’ next models. The race is not just on raw capability; it’s on the productivity of the R&D pipeline.

Clark’s essay doesn’t engage with this competitive dimension explicitly. The competitive dimension is part of why labs are publishing this data at all. The institutional response to AI-doing-AI-R&D needs to account for the fact that no individual lab can unilaterally slow down without losing the race — which is the coordination problem that the machine economy piece and synthesis piece develop.

2 · The interpretability gap

When AI is doing the R&D, humans understand less about how the next models are made. The hyperparameters, the training data composition, the optimization decisions, the architectural choices — all of these increasingly come from AI agents rather than from human researchers. The human researcher is the supervisor and reviewer, not the designer.

This is structurally the same problem alignment research is trying to solve at the model-output level, but at a different layer. Interpretability of model outputs assumes you know how the model was built. If the model was built by AI agents whose decisions are themselves not fully interpretable, the interpretability problem compounds. The automated alignment research proof-of-concept is meant to address this by having AI do alignment research on AI — but as noted in Section I, that brings the verifier-and-system unity problem.

Clark doesn’t engage with this explicitly. The institutional response should treat “we understand how our models are built” as a property that may be slipping as automated R&D scales. This has implications for safety evaluation, for regulatory disclosure, for IPO-level corporate disclosure (Anthropic’s Q4 2026 IPO needs language for this), and for the basic legitimacy of “this model was built by these humans following this process.”

3 · The brain drain question

What happens to human AI researchers when AI does the engineering? Three possible scenarios:

  • (a) Human researchers move up the abstraction stack — supervising AI agents, designing research directions, doing the inspiration work. Productive division of labor. Plausible for current senior researchers.
  • (b) Human researchers exit the field — the entry-level apprenticeship path through engineering schlep is closed (same dynamic as the junior software engineer market). Fewer humans entering AI research means fewer humans available to supervise AI agents in 5-10 years. Same “missing generation” problem.
  • (c) Human researchers concentrate at frontier labs — the labs with the most capable AI R&D agents need fewer human researchers per unit of output. Most human AI research happens at the small number of labs that operate the agents. Academic AI research becomes harder to compete with, accelerating the consolidation already underway.

Most likely outcome: a mix of all three. Senior researchers stay; entry-level researchers get displaced; remaining human AI talent concentrates at frontier labs. The labor displacement reality-check piece documents this dynamic for software engineering. AI research is the next sector up the chain.

4 · The volume thesis · more shots on goal

Section III argued that automating perspiration at sufficient volume may produce inspiration-equivalent outputs via search rather than insight. The strategic implication is that “compute for R&D” becomes a critical input. Whoever has the most compute available for AI-driven R&D exploration produces the most “shots on goal.” More shots = more rare discovery moments = faster effective rate of research progress.

This connects directly to the compute supply binding constraint from the synthesis piece. The compute geography is the research geography under the volume thesis. Labs that have privileged access to compute (Anthropic-SpaceX deal, OpenAI/Microsoft, Google internal capacity) capture the upside of the volume thesis. Labs that don’t can’t compete on volume of AI-driven research even if they have equal-capability models.

Clark doesn’t develop this explicitly. The volume thesis is the most important strategic add to Clark’s framing: if inspiration is volume-derived, the compute advantage that frontier labs are building isn’t just about training next-generation models; it’s about running enough AI-driven research exploration to find the discoveries that the human researchers wouldn’t reach by hand.

5 · The “AI does AI alignment research” recursive concern

The most consequential of Clark’s six benchmarks is also the most structurally concerning. Automated alignment research means AI is producing the alignment knowledge that AI is being aligned by. The compounding error piece addresses this at the model-generation level. At the research-output level, the recursive concern is similar but operates on a different timescale: does the alignment research community know what it knows, if a substantial fraction of that knowledge is produced by AI systems whose own alignment is the subject of the research?

This is not a hypothetical. Anthropic’s proof-of-concept demonstrates that the AI agents beat the human-designed baseline on scalable oversight specifically. Scalable oversight is one of the technical bottlenecks the alignment community considers most important. The fact that AI exceeded the human baseline on this specific problem is meaningful capability information AND meaningful structural information about who is producing alignment knowledge.

The institutional response needs to develop frameworks for evaluating alignment research produced by AI. Current peer review and publication frameworks weren’t designed for this. The alignment community is aware of the problem; the operational response is nascent. Clark’s essay establishes the empirical fact without developing the institutional consequence.


VI · Stakeholder implications

The structural read of Clark’s section has specific implications by audience:

For AI researchers in industry. The career trajectory needs to be calibrated to the engineering-vs-research distinction. Engineering work — kernel design, model training optimization, benchmark evaluation, paper reproduction — is being automated. Career value moves up the abstraction stack: research direction setting, supervision of AI agents, validation of AI-produced outputs, novel research problem identification. The senior-researcher-as-supervisor model is the durable role. The entry-level engineer-as-implementer role is depreciating. Plan for the supervisor role; treat the implementer role as table stakes.

For AI researchers in academia. The volume thesis matters most here. Academic labs with limited compute can’t compete on volume with frontier-lab automated R&D pipelines. The competitive response: focus on the inspiration-heavy work that doesn’t depend on volume — theoretical foundations, interpretability methodology, alignment frameworks, evaluation design. The areas where 1 deep insight beats 1000 quick experiments. Also: collaborate with frontier labs on the volume work, where academic novelty can leverage frontier-lab volume capacity.

For policymakers. The institutional response gap is widest here. The current policy framework treats AI R&D as something done by human researchers in regulated organizations. That framework breaks when AI agents do most of the R&D. New questions: who has liability for the outputs of automated AI R&D? Does corporate disclosure cover AI-produced research outputs? How do you regulate a research pipeline where the researcher and the subject are both AI systems? None of these questions have current answers. The framework needs to be built on the same 32-month window the synthesis piece describes.

For investors. Two specific implications. (a) Lab competition on automated R&D capability is the new productivity multiplier — labs with the best automated R&D pipelines pull ahead structurally. Anthropic’s CPU speedup trajectory (2.9× → 52×) is the publicly available signal; the rest of the pipeline is harder to evaluate but matters more. (b) Compute as research input — the volume thesis means compute capacity directly translates to research output velocity. Compute supply governance, geographic concentration, and capex commitments are all part of the AI research moat now, not just the AI training moat.

For everyone else. The coding singularity piece argued that the coding capability is the wedge into recursive self-improvement. This piece is the evidence that the wedge has already produced the capability set required for the recursive loop to be operational. The “vast swatches, perhaps the entirety, of AI engineering” claim from Clark is the public acknowledgment that we are inside the loop now. The remaining question is whether research bounds the trajectory or whether research is just engineering at scale. The next 12-24 months resolve this empirically. What gets built institutionally during that period determines what the equilibrium on the other side looks like.


VII · The structural read

Clark’s section is the most empirically grounded part of Import AI #455. The six benchmarks support the conclusion. The conclusion — “vast swatches, perhaps the entirety, of AI engineering” automated — is correct based on the public evidence.

The structural addition I’d make: research may not be a separate category that bounds the trajectory. The inspiration-vs-perspiration distinction is more porous than the Edison framing implies. If inspiration is partly a perspiration-derived phenomenon, then automating perspiration produces some amount of inspiration automatically. The volume thesis — more shots on goal yields rare discovery moments proportionally — is consistent with the available creativity data. The Erdős-1051 result, the centaur math discovery, the gradual accumulation of AI-assisted research outputs — all of these are consistent with research becoming engineering at scale rather than remaining a distinct category.

This matters because the institutional response calibrates to whether research is a permanent moat or a temporary one. If research is permanent, the trajectory bounds at the human-research-pace level and the next 32 months are productivity-multiplier years for human researchers. If research is temporary, the trajectory is automated R&D + automated AI research + the recursive self-improvement loop the coding singularity piece described, all operating together. The two scenarios differ by orders of magnitude in their consequences.

The honest read: the available data is consistent with both scenarios. PostTrainBench at half human baseline, Erdős discovery at low yield, centaur configuration prevalent. These data points support the “research is partly distinct” reading. The CPU speedup at 13× human, the volume thesis structural argument, the convergence pattern across six benchmarks — these data points support the “research is engineering at scale” reading.

My subjective read: research is partly distinct in the short term (12-24 months) but the distinction dissolves as automated R&D scales. The trajectory bounds at some point above human-research-pace and below pure-engineering-pace, with the cap likely at 10-30× human-researcher equivalent rather than 100×+. That’s still a transformative productivity multiplier on AI research. It’s also enough to make the Clark forecast — 60%+ probability of automated AI R&D by end of 2028 — substantially more plausible than the median outside observer would assign.

The institutional response should not bet on inspiration being a permanent moat. Building the policy framework, alignment research priorities, compute governance, and labor market transition support that the Clark forecast implies should proceed on the assumption that the engineering-vs-research distinction closes faster than the optimistic reading of Clark suggests. If the distinction holds longer than expected, the institutional capacity built during the window is still useful. If it closes faster than expected, the institutional capacity is necessary. The asymmetric cost-of-being-wrong points toward building the capacity now.

That is the structural read on Clark’s section from outside the frontier lab. The engineering is automated. The research is the residual. The residual is the question. The next 12-24 months produce the empirical data that resolves it.


About the Author

Thorsten Meyer is a Munich-based futurist, post-labor economist, and recipient of OpenAI’s 10 Billion Token Award. He spent two decades managing €1B+ portfolios in enterprise ICT before deciding that writing about the transition was more useful than managing quarterly slides through it. More at ThorstenMeyerAI.com.



Sources

  • Jack Clark · Import AI 455: Automating AI Research · “AI is getting good at core science skills essential to AI R&D” section · May 4, 2026 · jack-clark.net
  • CORE-Bench paper · arxiv.org/abs/2409.11363 · September 2024
  • CORE-Bench “solved” announcement · Sayash Kapoor · December 2025
  • MLE-Bench paper · openai/mle-bench · October 2024
  • MLE-Bench leaderboard pause · April 24, 2026
  • Anthropic CPU speedup trajectory · published in successive model cards · 2025-2026
  • Anthropic automated alignment research proof-of-concept · Import AI #454 · April 2026
  • PostTrainBench · Import AI #449 · March 2026
  • Centaur math discovery paper · UBC/UNSW/Stanford/DeepMind · Import AI #441
  • Erdős problems · Aletheia (Gemini-based) · Import AI #444
  • DeepSeek kernel optimization · Import AI #400
  • PyTorch-to-CUDA automation · Import AI #401
  • Meta Triton kernel work · Import AI #439
  • AscendCraft (Huawei Ascend kernel design) · Import AI #444
  • Cuda Agent · Import AI #448
  • AlphaGo Move 37 · DeepMind · 2016
  • Vaswani et al. · “Attention Is All You Need” · 2017
  • Thomas Edison · “Genius is 1% inspiration and 99% perspiration” · attributed quote, various sources
  • Neural architecture search · arxiv.org/abs/2301.08727
