I recently asked Opus to just “Add vector search” to my current hobby project, a topic I know very little about. It set up Manticore, pulled an embedding model, wrote a migration tool for my old keyword indices, and built the front end. I’m not exaggerating much either: the prompt was the length of a tweet.
I think it would easily have taken me 4+ hours to do that. It ran in 15 minutes while I played Kirby Air Riders and worked on the first try.
Afterward, I sort of had to reflect on the fact that I learned essentially nothing about building vector search. I wanted the feature more than I wanted to know how to build the feature. It kept me learning the thing I cared about rather than doing a side quest.
I don't think building it the long way is necessarily a more effective way to learn.
You could spend 4 hours (that you don't have) building that feature. Or... you could have the coding agent build it in the background for you in 15 minutes, then spend 30 minutes reading through what it did, tweaking it yourself and peppering it with questions about how it all works.
My hunch is that the 30 minutes of focused learning spent with a custom-built version that solves your exact problem is as effective as (or even more effective than) four hours spent mostly struggling to get something up and running and going down various rabbit holes of unrelated problem-solving.
Especially if realistically you were never going to carve out those four hours anyway.
This feels like exactly the wrong way to think about it IMO. For me “knowledge” is not the explicit recitation of the correct solution, it’s all the implicit working knowledge I gain from trying different things, having initial assumptions fail, seeing what was off, dealing with deployment headaches, etc. As I work, I carefully pay attention to the outputs of all tools and try to mentally document what paths I didn’t take. That makes dealing with bugs and issues later on a lot easier, but it also expands my awareness of the domain, checks my hubris in thinking I know something, and makes it possible to reason about the system when doing things later on.
Of course, this kind of interactive deep engagement with a topic is fast becoming obsolete. But the essence to me of “knowing” is about doing and experiencing things, updating my Bayesian priors dialectically (to put it fancily).
Just speaking from personal experience but the struggle is what creates the learning.
I learned refactoring patterns from Fowler's book. But when I tried to actually use them I still struggled. I didn't fully understand how the patterns worked until I actually tried (and failed) to use them a few times.
You don't really internalize things until you understand what doesn't work just as much as what does. You don't learn nearly as much from success as you do from failure. I would say the ratio of truly internalized knowledge is much higher for failure.
The notion that you can get a bot to just vomit out a vector database and then you can just "read the code" and you'll understand how a vector database works is just ludicrous.
> Or... you could have the coding agent build it in the background for you in 15 minutes, then spend 30 minutes reading through what it did, tweaking it yourself and peppering it with questions about how it all works
I can only speak for myself, but the only way I've been able to learn things rapidly in this industry is by writing things myself: even rote re-typing of books or SO answers was enough to trigger this for me.
Just querying models and reading output doesn't seem to work for me, but that's maybe down to my particular learning style.
That's assuming everyone learns the same way, which isn't true. Watching a streamer beat a dark souls boss won't automatically make you competent at the game. Reading through gobs of code generated for you without knowing why various things were needed won't help either. A middle approach could be to get the LLM to guide you through the steps.
I don't know. I built a vector similarity system for my hobby project the "hard" way, which was mostly getting Python set up with all the dependencies (seriously, Python dependency resolution is a non-trivial problem), picking a model with the right tradeoffs, installing pgvector, picking an index that optimized my distance metric, calculating and storing vectors for all my data, and integrating routes and UI which dispatched ANN search (order by / limit) to my indexed column. I also did some clustering, and learned something of how awkward it is in practice to pick a representative vector for a cluster - and in fact you may want several.
I now know what the model does (at a black box level) and how all the parts fit together. And I have plans to build classifiers on top of the vectors I built for further processing.
The experience of fighting Python dependencies gives me more appreciation for uv over venv and will leave me less stuck whenever the LLM fails to help resolve the situation.
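For anyone curious, the core of that setup is surprisingly small once the dependencies behave. A minimal sketch in Python, assuming psycopg 3 and the pgvector extension are available (the table name, vector size, and connection string are all invented for illustration):

    import psycopg

    conn = psycopg.connect("dbname=hobby")  # hypothetical database
    with conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
        # Tiny 3-dim vectors just for the sketch; a real embedding model
        # would give you 384/768/1536 dimensions.
        cur.execute("""
            CREATE TABLE IF NOT EXISTS docs (
                id bigserial PRIMARY KEY,
                body text,
                embedding vector(3)
            )
        """)
        # The index opclass must match the distance metric you query with
        # (vector_cosine_ops here goes with the <=> operator below).
        cur.execute("""
            CREATE INDEX IF NOT EXISTS docs_embedding_idx
            ON docs USING hnsw (embedding vector_cosine_ops)
        """)
        # ANN search really is just ORDER BY <distance> ... LIMIT n.
        cur.execute(
            "SELECT id, body FROM docs ORDER BY embedding <=> %s::vector LIMIT 10",
            ("[0.1, 0.2, 0.3]",),
        )
        print(cur.fetchall())
    conn.commit()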
I buy the productivity argument, but I’m not convinced “30 minutes reading/tweaking agent output” is equivalent to building it yourself when it comes to learning.
If your goal is the feature, then yes: letting the agent do the heavy lifting and reviewing the diff afterward is a huge win.
But if your goal is understanding / skill-building, the hard part usually isn’t seeing a working solution. It’s doing the messy work of (a) making design choices, (b) getting stuck, (c) debugging, and (d) forming the mental model that lets you reproduce it later. Reviewing a correct implementation can create a feeling of “I get it,” but that feeling often doesn’t survive a blank file.
I’ve noticed this in my own hobby coding: LLMs are great for familiarity and unblocking progress, but the learning “sticks” much more when I’ve had to struggle through the failure modes myself. I’m watching the same dynamic play out with my son using ChatGPT to study for physics/calculus... it feels deep for him in the moment with the LLM, but exam-style transfer exposes the gaps.
Reading without actually doing does not really result in learning, only a very marginal amount.
Try reading tutorials on a new programming language for 30 minutes and then open a new text file and write a basic loop with a print statement.
It won’t even compile, which shows you haven’t really learned anything; you’ve just read an interesting story. Sure, you pick up a few bits here and there, but you still don’t know how to do even the most basic thing.
This really makes for a good natural experiment: carry on :)
I have a hard time imagining how much you'd have to literally bribe me to get me to try doing it the way you describe. I'm too interested in implementation details of things and looking for innovations—in fact I make my living doing that, like some cyberpunk gremlin just delighting in messing with stuff in unexpected ways. I don't understand why you're not, but maybe it's not for me to understand.
Carry on. We'll check back and see how it worked for ya :)
Generally I agree with your takes and find them very reasonable but in this case I think your deep experience might be coloring your views a bit.
LLMs can hurt less experienced engineers by keeping them from building an intuition for why things work a certain way, or why an alternative won't work (or conversely, why an unconventional approach might not only be possible, but very useful and valuable!).
I think problem solving is optimization in the face of constraints. Generally using LLMs IME, the more you're able to articulate and understand your constraints, and prescriptively guide the LLM towards something it's capable of doing, the more effective they are and the more maintainable their output is for you. So it really helps to know when to break the rules or to create/do something unconventional.
Another way to put it is that LLMs have commodified conventional software, so learning when to break or challenge convention is going to be where most of the valuable work is going forward. And I think it's hard to actually do that unless you get into the weeds and battle/try things because you don't understand why they won't work. Sometimes they do.
Agree completely. The other aspect for me is that LLMs make me unafraid to take on initiatives in areas I know nothing about and/or am uninterested in pursuing due to discrepancy in effort vs reward. As a result I end up doing more and learning more.
Yes, it's a risk if you don't guide it well, but you can also manage it pretty ok.
I have a side project that I started in January 2024. Initially, I used GitHub Copilot autocompletions heavily. This year I started using CLI agents (mostly Claude, but others too) to do more stuff. I got to around 100k LoC (sure, it's not enterprise scale, but for a personal project it's pretty big), but I'd argue it's maintainable: it's split into 10 Django apps that are each pretty self-contained, and I've done several refactors on it (using AI agents) to make it more maintainable.
The point of eventual “all-code-is-written-by-AI” is that it really does not matter if your code is maintainable or not. In the end, most products are written to accomplish some sort of goal or serve a need within a given set of restrictions (cost, speed, etc.). If the goal is achieved within the given restrictions, the codebase can be thrown away until the next need is there, and everything can just be created from scratch again, if needed.
> I learned essentially nothing about building vector search. I wanted the feature more than I wanted to know how to build the feature
Opus/Anthropic is hands down the best in my experience. But using it feels like intellectual fast food (they all are). I hate the fact that I can build something like a neatly presentable one-off SPA tool (ty Simon) when I'm barely paying attention. It feels unsatisfying to use.
EDIT: because I'm rambling, I like "AI" as much as the next guy, probably more because I was there before it turned into LLMs"R"US, but I also like(d) the practice of sitting around listening to music solving problems with Scala. I don't know why we've decided to make work less fun.
I sort of disagree. It's somewhat like having HyperCard again. You can build fun UI things and make machines do what you want them to do. You can care about the parts you want to care about and not sweat the parts you don't want to learn in detail (yet). And Claude and Codex make great guides/sherpas.
There are just too many parts involved to do anything. For example, today I built a simple data collection app to use on my phone that involves inventories with photos, for a tedious workflow I have to do. I knew what I wanted but didn't know how to even choose which tools to bother learning. And just being able to try things to see if an approach works or not, without spending hours learning one thing or another or wading through the hell of web search, is really great.
Things I learned today that I figure everyone else must know: if you want to take a photo from a webapp, I guess you need https. So I decided to try mTLS (knew it existed but never had the time) and asked Claude to write me a short tutorial about setting it up, creating keys, and importing them (including a cool single-line trick of spinning up a Python server and downloading the keys on my phone rather than finding a USB stick or whatever). And then helping me figure out a path out of the suffering of Chrome and Firefox hating a self-signed CA. At least I figured out how to make Firefox happy, but it would insist on prompting me for the certificate for every htmx request.

Chatting with Claude, I learn Caddy is pretty cool; it's Go. Claude suggests an auth boxcar when I balk at adding auth and user management to my app, because I think the webserver should handle all this shit (wtf is a boxcar? Claude clues me in). I tell Claude to use Go or Rust to build the boxcar because Jesus Christ, "yay", build another service just to get a goddamn customized CRUD app on my phone that can take a picture. Claude picks Go, which is fine by me. (Incidentally, I can't write Go, but I can read it, it's on my "to be learned" agenda, and Go seems safer than a pile of Python for this simple thing.)

The boxcar was fine, but Claude was struggling with getting headers to work in the Caddy config. So while Claude is working on that, I do a quick Google about whether Caddy can have extensions, because there has to be a better way to "if someone has authenticated successfully, give them a cookie that will last an hour so they don't have to mash the confirm about using the certificate for every goddamn htmx request" than spinning up a web service. Interrupt Claude and suggest an extension instead of a boxcar. Claude's on board, so we ditch the boxcar. Have Claude and Codex evaluate the extension for security. They find important issues about things a jerk might do, fix them. So successful mTLS connections transition to session cookies, and my dumb CRUD tool doesn't have to worry about auth. Which it didn't have to do anyway, except browsers say so, etc., because my phone is literally only able to access the server via VPN anyway.
Other things I have learned today that only wasted 5min of Claude's time rather than hours of mine: Firefox camera access can't control flash, focus or zoom. So call out to the native app instead.
This is all quite fun and the tool I'm building is going to really make my own life better.
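Is there a better way to do this: probably.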
> I wanted the feature more than I wanted to know how to build the feature
This is exactly what LLMs are great for. For instance, I'm looking at trading models. I want to think about buying and selling. I need some charts to look at, but I'm not a chart wizard. I can make basic charts, but it feels tedious to actually learn the model of how the charting software works. The LLM will just give me the chart code for the visualization I want, and if I ever care to learn about it, I have it in a form that is relevant to me, not the form of the API documents.
In general, a lot of coding is like this. You have some end goal in mind, but there's a bunch of little things that need to be knitted together, and the knitting used to take a lot of time.
I like to say the LLM has reduced my toil while getting me to the same place. I can even do multiple projects at once, only really applying myself where there is a decision to be made, and it's all possible because I'm not sorting out the minutiae of some incidental API.
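To make that concrete, here is roughly the kind of throwaway chart code I mean, as a toy sketch in Python (assuming matplotlib is installed; the price data is invented):

    import matplotlib.pyplot as plt

    prices = [100, 101, 99, 102, 104, 103, 105, 107, 106, 108]
    window = 3
    # Simple moving average over the last `window` prices.
    moving_avg = [
        sum(prices[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(prices))
    ]

    plt.plot(range(len(prices)), prices, label="price")
    plt.plot(range(window - 1, len(prices)), moving_avg, label=f"{window}-period MA")
    plt.legend()
    plt.title("Toy trading chart")
    plt.show()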
I like having the flexibility. If it's something I want to learn, I'll ask it to write some explanation into an md that I can read, and I can also look at the code diff in more detail. But if it's tedious things like interacting with the Android SDK, I'll just let it do whatever it needs to do to get the feature working.
The result of you having worked 4 hours to implement the thing is not just that you have the thing, it's that you have the thing and you understand the thing. Having the thing is next to useless if you don't understand it.
At best it plods along as you keep badgering Claude to fix it, until inevitably Claude reaches a point where it can't help. At which time you'll be forced to spend at least the 4 hours you would have originally spent trying to understand it so you can fix it yourself.
At worst the thing will actively break other things you do understand in ways you don't understand, and you'll have to spend at least 4 hours cleaning up the mess.
Either way it's not clear you've saved any time at all.
Respectfully, I think I’m in a better position to decide a) what value this has to me and b) what I choose to learn vs just letting Opus deal with. You don’t have enough information to say if I’ve saved time because you don’t know what I’m doing or what my goals are.
You do learn how to control claude code and architect/orient things around getting it to deliver what you want. That's a skill that is both new and possibly going to be part of how we work for a long time (but also overlaps with the work tech leads and managers do).
My proto+sqlite+mesh project recently hit the point where it's too big for Claude to maintain a consistent "mental model" of how e.g. search and the db schemas are supposed to be structured; it kept taking hacky workarounds by going directly to the db at the storage layer instead of the API layer, etc., so I hit an insane amount of churn trying to get it to implement some of the features needed to get it production ready.
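Here's the whackamole/insanity documented in git commit history: https://github.com/accretional/collector/compare/main...feat...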
But now I know some new tricks and intuition for avoiding this situation going forward. Because I do understand the mental model behind what this is supposed to look like at its core, and I need to maintain some kind of human-friendly guard rails, I'm adding integration tests in a different repo and a README/project "constitution" that claude can't change but is accountable for maintaining, and configuring it to keep them in context while working on my project.
Kind of a microcosm of startups' reluctance to institute employee handbook/kpis/PRDs followed by resignation that they might truly be useful coordination tools.
I didn't really understand the "long task" thing until I actually experienced it. The problem is finding a task you can set an agent that justifies working for that long. I finally hit one when I tried porting that Python HTML5 parser to JavaScript by pointing Codex CLI at the 9,200 html5lib-tests test suite: https://simonwillison.net/2025/Dec/15/porting-justhtml/
It's pretty amazing to watch tools-in-a-loop crunch away for >4 hours to solve a generally difficult problem through sheer brute-force.
To be clear, this doesn't mean that it takes the AI > 4 hours to do the task. METR is measuring the difficulty of tasks by how long it takes a human to do the same task. This benchmark is saying that Opus 4.5 can now do tasks (related to AI R&D, coding foremost among them) that take human experts > 4 hours (at a 50% reliability level; whether that's actually useful depends, of course, on the cost of failure). It is silent on how long it takes AI systems to do those tasks. In theory an AI system could take longer than that (in practice it's usually significantly shorter).
This is of course quite highly correlated with an AI system being able to churn through a task for a long time. But it's not necessarily the same thing.
Of course the big questions are going to arise if/when we start passing lines like 8 hours (a whole work day) or 40 hours (a whole work week).
I think you might be misunderstanding the article actually, this is about AI solving tasks as measured by how long it takes a human to solve the task. The AI could potentially solve it much quicker, but the use of "human time to solve" is an attempt to create a metric that reveals long horizon complexity (as I understand it anyway).
It's interesting because like the article notes, AI is really smashing benchmarks, but actual usefulness in automation of thought work is proving much more elusive. I think that collective experience of AI just not being that useful, or as useful as benchmarks suggest it should be, is captured in this metric.
I've practiced a healthy skepticism of the recent boom, but I can't reason why the long-horizon time wouldn't stretch to 8 hours or a week's worth of effort by next year. After Opus 4.5, governments and organizations should really figure out a path out of this storm because we're in it now.
METR is using hours of equivalent human effort, not actual hours the agent itself spends, so by their methodology, your task might qualify as one where it pulls off much more than 4h of human work.
"Human hours equivalent" itself is an interesting metric, because: which human? Or rather, I'm sure they had a coherent definition in mind: presumably a human reasonably competent at whatever the specific task is. But hours the abstract human standard would spend is different from the hours any specific person, say you or I, would spend.
In particular, some of the appeal (and risk!!) of these things is precisely that you can ask for help with things that would be quick work for someone (who knows jq, or a certain corner of the PyPI library ecosystem, or modern CSS, or TypeScript annotations, or something else) but not for you.
The “50% time horizon” feels most actionable when you pair it with an expected-value model.
For a given task: EV ≈ (human_time_saved × $/hour) − (p_fail × cost_of_failure) − (iteration/oversight cost).
A model crossing 4h-at-50% might be hugely useful for low failure-cost work, and still net-negative for anything where rollback/debug is expensive. The missing piece is how p_fail scales with task length + how recoverable failures are.
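A toy version of that calculation in Python, with purely illustrative numbers (the rates and failure costs here are made up):

    def task_ev(human_hours_saved, hourly_rate, p_fail, cost_of_failure, oversight_cost):
        # EV ≈ (human_time_saved × $/hour) − (p_fail × cost_of_failure) − (oversight cost)
        return human_hours_saved * hourly_rate - p_fail * cost_of_failure - oversight_cost

    # 4h task at 50% reliability, cheap failure (throwaway prototype): clearly worth it.
    print(task_ev(4, 100, 0.5, 50, 25))   # 350.0
    # Same task where a failure is expensive to roll back and debug: net negative.
    print(task_ev(4, 100, 0.5, 900, 25))  # -75.0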
The length of tasks (measured by how long they take human professionals) that generalist frontier model agents can complete autonomously with 50% reliability has been doubling approximately every 7 months for the last 6 years...
My problem with the OpenAI models (GPT-5.2 in particular) recently is an extreme aversion to doing more than the smallest step in a task before asking for user input. Even if I explicitly instruct it to continue without input until the task is complete, it ignores the instruction.
I cannot imagine GPT-5.2 working on a task for more than 2 minutes, let alone 4 hours. I’m curious if you’ve run into this and figured out a way around it?
I've not had that problem at all with GPT-5.2 running in Codex CLI.
I use prompts like this:
Build a pure JavaScript library (no dependencies) for encoding and
decoding this binary format. Start by looking at how the lite3-python
library works - the JavaScript one should have the same API and probably the
same code design too. Build the JS one in lite3-javascript - it should be a
single JavaScript module which works in both Node.js and in the browser.
There should be a test script that runs with Node.js which runs against the
files in the lite3-python/format_suite folder. Write the test script first,
run it and watch it fail, then build the JavaScript library and keep running
the tests until they pass.
I find that surprising. GPT 5.2 is the model I've had working the longest. It frequently works more than 4 hours nonstop, while earlier models would stop to ask if they should continue every 10 minutes. 5.1 and earlier ignores it if I ask it to continue until a task is done, but 5.2 will usually finish it.
How are you guys even doing long tasks with plain Codex or Claude code?
I use Claude code and I get hit with a permissions prompt every 2 seconds for anything I try to do.
Sure I can turn off all dangerous permissions but it'd probably honestly stop and claim it's finished well before it actually is in most cases from my experience.
To be fair I haven't tried Codex, so maybe it's better at this, but in my experience almost every model stops at some point and claims victory, or stops and tells me something like "next we'll continue on with XYZ", at which point I have to prompt it to continue.
You have to use --yolo or --dangerously-skip-permissions options.
Thankfully the cloud versions (Claude Code for web, Codex Cloud) run like that already, and are relatively safe in that if anything goes wrong it happens on someone else's computer.
Codex (at least 5 and 5.1) is bad at asking for permission. Whenever it wants to run pre-commit or platformio, it tries to do that, that fails because of the sandbox, and then Codex decides something is wrong with the cache directory and keeps asking for permission to sudo chown ~/.cache, every time.
I have to specifically tell it to request permission for the command it wants to run, and then it works. Very annoying, and very annoying that it can't persist the permission, like Claude Code can, so it doesn't have to ask again every single time.
Quickly looking at the source code, mostly treeBuilder and tokenizer, I do see several possible improvements:
- Use Typescript instead of JavaScript
- Use perfect hashes instead of ["a", "b", "c"].includes() idioms, string equalities, Sets, etc.
- Use a single perfect hash to match all tags/attribute names and then use enums in the rest of the codebase
- Use a single if (token.kind === Tag.START) instead of repeating that for 10 consecutive conditionals
- Don't return the "reprocess" constant, but use an enum or perhaps nothing if "reprocess" is the only option
- Try tail recursion instead of a switch over the state in the tokenizer
- Use switches (best after a perfect hash lookup) instead of multiple ifs on characters in the tokenizer
- "treeBuilder.openElements = treeBuilder.open_elements;" can't possibly be good code
Perhaps the agent can find these itself if told to make the code perfect and not just pass tests.
I didn't include the TypeScript bit though - it didn't use TypeScript because I don't like adding a build step to my JavaScript projects if I can possibly avoid it. The agent would happily have used TypeScript if I had let it.
I don't like that openElements = open_elements pattern either - it did that because I asked it for a port of a Python library and it decided to support the naming conventions for both Python and JavaScript at once. I told it to remove all of those.
It pushed back against the tail recursion suggestion:
> The current implementation uses a switch statement in step(). JavaScript doesn’t have proper tail call optimization (only Safari implements it), so true tail recursion would cause stack overflow on large documents.
You should take into consideration the time it took to make those 9200 tests originally. If you have good test coverage the agent can go much farther ahead.
I'm conflicted about opining on models: no individual has actually done a large sample of real-world tasks with a lot of models to be able to speak with authority, but I kinda think we should each share our dubiously-informed opinions anyway because benchmarks aren't necessarily representative of real-world use and many can clearly be gamed.
Anyhow, I noticed more of a difference trying Opus 4.5 compared to Sonnet 4.5 than I'd noticed from, for example, the last couple Sonnet bumps. Objectively, at 1.66x Sonnet's price instead of the old 5x, it's much more often practical to consider reaching for than past Opus models. Anthropic's basic monthly thing also covers a fair amount of futzing with it in CC.
At the other extreme, another surprise of this family is that Haiku 4.5 with reasoning on is usable: better than Sonnet with thinking off according to some benchmarks, and in any case subjectively decent for point edits, single-page thingies, and small tools.
IMHO, in the software field, learning can be simplified to 2 phases. The first one is exploration, where we read blogs, docs, and books, and listen to lectures and talks. Then comes the second phase of exploitation, where we actually use the thing we learned. You can think of all those “learning from scratch” videos as someone doing phase 2. I love phase one and most of the time don’t have the time and energy to sit down and go through phase 2. Nowadays, I feel like the 2 phases are combined, thanks to LLMs. For instance, I wanted to do some animation for visualizations. This week, I learned AnimeJS by watching CCAgent create the animation I wanted, interspersed with questions that were answered with diagrams and text, which accomplishes phase 1. I do not like letting them run the show. Then comes phase 2, where I organize the code, abstract things, and rewrite code, still using their help for long rewrites, but totally my ideas and mine only. This saves time tremendously.
Opus looks like a big jump from the previous leader (GPT 5.1), but when you switch from "50%" to "80%", GPT 5.1 still leads by a good margin. I'm not sure if you can take much from this - perhaps "5.1 is more reliable at slightly shorter stuff, choose Opus if you're trying to push the frontier in task length".
They should do a 95% and 99% version of the graphs; otherwise it's hard to ascertain whether the failure cases will remain in the elusive "stuff humans can do easily but LLMs trip up on despite scaling" category.
> current models have almost 100% success rate on tasks taking humans less than 4 minutes
The contrary is easily verifiable by anyone individually. It's nowhere near 100%, or even 50%, for few-minute tasks, even with the best models in real-world situations.
I've only noticed that combination (failure of short everyday tasks from SOTA models) on image comprehension, not text.
So some model will misclassify my American black nightshade* weeds as a tomato, but I get consistently OK results for text out from good models unless it's a trick question.
The key insight from this benchmark is using "human-equivalent hours" rather than actual AI execution time. It's measuring capability complexity, not speed.
What's interesting is the 50% vs 80% reliability gap. At 50% success rate on a 4-hour task, you're essentially gambling. If it fails, you've potentially wasted the 4 hours plus the time debugging why it failed.
This is why I think the current "agent" paradigm needs human checkpoints at regular intervals. Let the AI work for 30 minutes, then review progress. Repeat. This way you catch drift early before it compounds.
The other thing missing from these benchmarks: recovery ability. When the AI gets stuck on hour 3 of a 4-hour task, can it recognize the problem and backtrack? Or does it confidently continue down the wrong path?
You’ve only wasted the 4 hours if you didn’t spend them doing something else.
At 50/50 it’s an OK bet if the debugging time is much less than the total human time. Even if the loops are long, you might rather spend 4 hours of deep work on an important human thing, or just relaxing, vs babysitting the LLM. Assuming that about half the time that will pay off with a correctly done thing with very little effort, it’s kind of amazing.
> The key insight from this benchmark is using "human-equivalent hours" rather than actual AI execution time. It's measuring capability complexity, not speed.
> What's interesting is the 50% vs 80% reliability gap. At 50% success rate on a 4-hour task, you're essentially gambling. If it fails, you've potentially wasted the 4 hours plus the time debugging why it failed.
Your first two paragraphs are at odds with each other. If it fails, you've potentially wasted the time it took the agent to *perform* the "it takes humans 4h" long task. Which in most cases is single digit minutes.
That's why one of the solid use cases for agents is doing multiple throwaway proofs of concept to explore a problem / new feature before deciding on a solution to actually implement. Usually you'd have time for one, or maybe none. If it fails you've lost maybe 10 minutes, but likely learned something new about the potential solution.
After spending many hours optimizing some routines, I now think performance optimization is a great benchmark for identifying how generally smart an AI is at helping with some specific piece of code.
Solutions are quite easy to verify with differential testing and produce a number for direct comparison.
Less code is usually better and you generally can't "cheat" by adding more cruft so it nullifies the additive bias. Good optimization requires significant understanding of the underlying structures. Everything has performance tradeoffs so it requires systemic thinking and not just stringing independent pieces together.
So far I've found that Gemini Pro 3 was the best at reasoning about tricky SIMD code but the results with most models were pretty underwhelming.
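A minimal version of that differential-testing setup, sketched in Python; `reference` and `optimized` are hypothetical stand-ins for a trusted implementation and the candidate the model produced:

    import random, time

    def reference(xs):
        # Trusted but possibly slow implementation.
        return sum(x * x for x in xs)

    def optimized(xs):
        # Candidate implementation produced during optimization.
        acc = 0.0
        for x in xs:
            acc += x * x
        return acc

    random.seed(0)
    cases = [[random.random() for _ in range(1000)] for _ in range(200)]

    # Differential check: both must agree (within tolerance) on every input.
    for xs in cases:
        assert abs(reference(xs) - optimized(xs)) < 1e-9

    # And the benchmark produces a single number to compare.
    for fn in (reference, optimized):
        t0 = time.perf_counter()
        for xs in cases:
            fn(xs)
        print(fn.__name__, round(time.perf_counter() - t0, 4), "seconds")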
The big issue is the 50%; if you switch to 80%, the horizon is much shorter. And if you land on the wrong side of that 50% on a 4-hour task, how much additional time do you need on top of the 4 hours? Repeated attempts only help slowly: the chance of still failing is 50% * 50% -> 25% after two tries, 50%^4 -> 6.25% after four. The cost of bad luck is very high.
Is it bad luck though? I would've thought that if the AI can't solve it on the first try, the probability of fixing it on the second try would be higher/lower (depending on the task).
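For what it's worth, the arithmetic above assumes independent attempts, which, as the reply just above notes, may not hold in practice. Under that assumption, though, the numbers work out like this:

    # Chance of still having no success after k independent attempts at p = 0.5,
    # and the expected number of attempts (geometric distribution).
    p = 0.5
    for k in (1, 2, 4):
        print(k, "attempts, still failing:", (1 - p) ** k)  # 0.5, 0.25, 0.0625
    print("expected attempts until success:", 1 / p)        # 2.0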
> This is why I think the current "agent" paradigm needs human checkpoints at regular intervals. Let the AI work for 30 minutes, then review progress. Repeat. This way you catch drift early before it compounds.
The problem with this approach is that in 30 minutes, an agent is able to produce a massive amount of stuff. Reviewing all this is a nightmare, in the sense that on the surface it seems fine and it often works, until it doesn't. The bugs introduced are often subtle and their effects manifest later, if ever.
So, for stuff that matters (to me), I prefer not to use agents at all.
Maybe things will change in a year, or 5, or 10, and I will be giving it a try. But for the moment it's just not worth it, and the upside-down workflow it pushes on me is just making me tired and making me lose satisfaction in doing my job.
> As shown above, when we fit a similar trend to just the 2024 and 2025 data, this shortens the estimate of when AI can complete month-long tasks with 50% reliability by about 2.5 years.
I don't think I have 50% success rate at month long tasks.
How does "cost" per frontier task change with time?
Extrapolating any exponential growth is always dangerous, but over say 3 years at this pace, we'd go from 2 hours to 70, or about 8 days' work.
Quite scary. But what does cost do over the same timeline? Does it increase with computational complexity? Is it worse, because, IIRC, transformers' computational cost is quadratic in context length? Is it better, thanks to some kind of economies of scale?
I glanced through the article but couldn't find any info on this.
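The extrapolation above is easy to reproduce. A back-of-the-envelope sketch in Python, taking the 7-month doubling time at face value (which is exactly the dangerous part):

    def horizon_hours(start_hours, months, doubling_months=7):
        # Exponential extrapolation of the 50%-reliability time horizon.
        return start_hours * 2 ** (months / doubling_months)

    # Starting from roughly a 2-hour horizon:
    for months in (12, 24, 36):
        print(months, "months:", round(horizon_hours(2, months), 1), "hours")
    # ~6.6h after a year, ~21.5h after two, ~70.6h (about 8 working days) after three.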
I read their citations (which are actually by the same authors as this paper), and they also define using Python's built-in web server to "build a web server" as a long task.
I appreciate horizon expansion as a fundamental metric, but duration seems like too crude a measure. We used to like it when computers were fast.
An infinitely unscrupulous model provider could double this five hour result by cutting your output tokens/second in half!
This isn't only a question of gaming the metric: the very strong current small-fast models (4.5 Haiku, Gemini 3 Flash) have no hope of being measured fairly against this - they will succeed or fail much faster just because they are much faster.
How about something like total output token count as the "long term horizon" metric instead?
My introduction to this type of model measuring came from an interview where the repeatedly hammered-home point was that Sonnet 4.0 nailed a gigantic refactor (conversion of a large legacy asp.net or similar into react server-side components or similar) in a loop whose runtime was some large number of hours. I mistakenly attributed the same framing here.
Task duration is the time it would take for humans to complete the task. The speed of the models, and how long they might take to complete the task, is not part of this metric.
I think the problem here is that the LLM eventually pollutes its context window with so much of the current task that the larger picture or architectural sanity is forgotten in favor of the task at hand.
And software is rarely one and done; after a few rounds like this, the architecture will have become schizophrenic. Combating this tendency usually requires a lot of the work from these "long tasks" to be thrown away, and more closely limiting what the AI is trying to do as they happen. The success of one "long task" is not necessarily a good thing!
This was why server-side compaction in GPT-5.2 was such a big deal. The model is by default provided with a tool that will prioritise the initial task and salient updates in context window compaction, and the new model has been trained to use it.
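To be clear about what that means mechanically, here is a toy sketch of the general idea in Python (not how GPT-5.2's actual server-side tool works): compaction keeps the original task plus a summary of the middle of the conversation, and drops the raw turns.

    def summarize(messages):
        # Placeholder: in practice this would be another model call that keeps
        # only the salient decisions and constraints.
        return f"[summary of {len(messages)} earlier turns]"

    def compact(messages, max_messages=50):
        if len(messages) <= max_messages:
            return messages
        task = messages[0]                       # keep the initial task verbatim
        recent = messages[-(max_messages - 2):]  # keep the most recent turns
        middle = messages[1:-(max_messages - 2)]
        summary = {"role": "system", "content": summarize(middle)}
        return [task, summary] + recent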
> We show that this metric has been consistently exponentially increasing over the past 6 years, with a doubling time of around 7 months.
If true, how much of this is a result of:
1. Genuine technical advancement
or:
2. Shoveling trillions of dollars into compute resources in order to service incoming LLM requests in a way that is completely unrealistic over the long term?
In other words… are we talking about genuine, sustainable innovation that we get to take with us moving forward and benefit from? Or are we talking about an “improvement” that is more akin to a mirage that will eventually disappear when the Ponzi scheme eventually collapses?
Much of this is due to vastly better posttraining RL, not models that are much bigger. The idea that most of these gains comes from training really big models, or throwing immensely larger amounts of compute at it, is not really true.
I wonder how much of this stuff is attributable to true model advancement, or if it's an improvement in the agentic harness? It's impossible to separate strict model improvement from improvement in the associated tools.
They measure the time it takes a human to complete the task. They don't care how long the AI takes (although in practice it's much faster than human). Measuring tokens isn't a good idea because newer models can complete tasks using fewer tokens.
It's complicated. Opus 4.5 is actually not that good at the 80% threshold but is above the others at the 50% completion threshold. I read there's a single task around 16h that the model completed, and the broad CI comes from that.
METR currently simply runs out of tasks at 10-20h, and as a result you have a small N and lots of uncertainty there. (They fit a logistic to the discrete 0/1 results to get the thresholds you see in the graph.) They need new tasks, then we'll know better.
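For intuition on how that fit works, here's a toy version in Python (the success/failure data is invented, and this is not METR's actual code, just the general shape of a logistic fit over log task length):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    task_minutes = np.array([2, 5, 10, 20, 40, 80, 160, 320, 640])
    success = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0])  # discrete 0/1 outcomes

    X = np.log2(task_minutes).reshape(-1, 1)
    model = LogisticRegression().fit(X, success)

    # p(success) = 0.5 where slope * log2(t) + intercept = 0.
    slope, intercept = model.coef_[0][0], model.intercept_[0]
    print("50% horizon ≈", round(2 ** (-intercept / slope)), "minutes")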
Gemini is already the name of a Greek god, a constellation, a space mission, a crypto exchange, an astrological sign, a car, and a comic villain! How will we ever figure out which one someone is talking about?
>>My hunch is that the 30 minutes of focused learning spent with a custom-built version that solves your exact problem is as effective
My hunch is the exact opposite of this. You will learn close to nothing by reading this for 30 minutes.
It's the same hunch we all have when we think we're going to learn something by watching tutorials. We learn by struggling.
The struggle is how you learn. I think that’s pretty much established scientifically by now?
You can spend 30 minutes watching someone learn how to ski and you will learn something. You will not be able to ski by yourself though.
If you are not failing you are barely learning anything.
Yeah and then it becomes an unmaintainable monolith because at some point the AI also lost track of what code does what.
Great for Opus because you’re now a captive customer.
If you don’t know that Opus isn’t an entity, but a model, you might be a little too far removed from the situation to comment authoritatively?
“We” didn’t decide to make work less fun, others decided for us.
Can we see that vector search code or use it?
> inevitably Claude reaches a point where it can't help.
Perhaps not. If LLMs keep getting better, more competent models can help him stay on top of it lol.
2 replies →
Well, look through its log and what it did, and if you don't understand anything, ask it why it did it and what it does.
I didn't really understand the "long task" thing until I actually experienced it. The problem is finding a task you can set an agent that justifies working for that long. I finally hit one when I tried porting that Python HTML5 parser to JavaScript by pointing Codex CLI at the 9,200 html5lib-tests test suite: https://simonwillison.net/2025/Dec/15/porting-justhtml/
It's pretty amazing to watch tools-in-a-loop crunch away for >4 hours to solve a generally difficult problem through sheer brute-force.
To be clear, this doesn't mean that it takes the AI > 4 hours to do the task. METR is measuring the difficulty of tasks by how long it takes a human to do the same task. This benchmark is saying that Opus 4.5 can now do tasks (related to AI R&D, coding foremost among them) that take human experts > 4 hours (at a 50% reliability level; whether that's actually useful depends, of course, on the cost of failure). It is silent on how long it takes AI systems to do those tasks. In theory an AI system could take longer than that (in practice it's usually significantly shorter).
This is of course quite highly correlated with an AI system being able to churn through a task for a long time. But it's not necessarily the same thing.
Of course the big questions are going to arise if/when we start passing lines like 8 hours (a whole work day) or 40 hours (a whole work week).
I think you might be misunderstanding the article actually, this is about AI solving tasks as measured by how long it takes a human to solve the task. The AI could potentially solve it much quicker, but the use of "human time to solve" is an attempt to create a metric that reveals long horizon complexity (as I understand it anyway).
It's interesting because like the article notes, AI is really smashing benchmarks, but actual usefulness in automation of thought work is proving much more elusive. I think that collective experience of AI just not being that useful, or as useful as benchmarks suggest it should be, is captured in this metric.
I've practiced a healthy skepticism of the recent boom, but I can't see a reason why the time horizon wouldn't stretch to 8 hours, or a week's worth of effort, by next year. After Opus 4.5, governments and organizations should really figure out a path out of this storm, because we're in it now.
6 replies →
METR is using hours of equivalent human effort, not actual hours the agent itself spends, so by their methodology, your task might qualify as one where it pulls off much more than 4h of human work.
"Human hours equivalent" itself is an interesting metric, because: which human? Or rather, I'm sure they had a coherent definition in mind: presumably a human reasonably competent at whatever the specific task is. But hours the abstract human standard would spend is different from the hours any specific person, say you or I, would spend.
In particular, some of the appeal (and risk!!) of these things is precisely that you can ask for help with things that would be quick work for someone (who knows jq, or a certain corner of the PyPI library ecosystem, or modern CSS, or TypeScript annotations, or something else) but not for you.
The “50% time horizon” feels most actionable when you pair it with an expected-value model. For a given task: EV ≈ (human_time_saved × $/hour) − (p_fail × cost_of_failure) − (iteration/oversight cost). A model crossing 4h-at-50% might be hugely useful for low failure-cost work, and still net-negative for anything where rollback/debug is expensive. The missing piece is how p_fail scales with task length + how recoverable failures are.
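To make the trade-off concrete, here's a minimal sketch of that expected-value calculation in TypeScript; the numbers and the `TaskEstimate` shape are made up purely for illustration:

```typescript
// Rough expected-value sketch for delegating a task to an agent.
// All inputs are hypothetical; the point is the shape of the trade-off.
interface TaskEstimate {
  humanHours: number;      // how long the task would take a human
  hourlyRate: number;      // value of that human time, $/hour
  pFail: number;           // probability the agent's attempt fails
  failureCost: number;     // cleanup/rollback/debug cost if it fails, $
  oversightHours: number;  // time spent prompting and reviewing
}

function expectedValue(t: TaskEstimate): number {
  const timeSaved = t.humanHours * t.hourlyRate;
  const expectedFailure = t.pFail * t.failureCost;
  const oversight = t.oversightHours * t.hourlyRate;
  return timeSaved - expectedFailure - oversight;
}

// A 4-hour task at 50% reliability: great if failure is cheap...
console.log(expectedValue({ humanHours: 4, hourlyRate: 100, pFail: 0.5, failureCost: 50, oversightHours: 0.5 }));  // 325
// ...net-negative if rollback/debugging is expensive.
console.log(expectedValue({ humanHours: 4, hourlyRate: 100, pFail: 0.5, failureCost: 800, oversightHours: 0.5 })); // -50
```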
1 reply →
>which human
The second graph has this under it:
The length of tasks (measured by how long they take human professionals) that generalist frontier model agents can complete autonomously with 50% reliability has been doubling approximately every 7 months for the last 6 years...
2 replies →
My problem with the OpenAI models (GPT-5.2 in particular) recently is an extreme aversion to doing more than the smallest step in a task before asking for user input. Even if I explicitly instruct it to continue without input until the task is complete, it ignores the instruction.
I cannot imagine GPT5.2 working on a task for more than 2 minutes, let alone 4 hours. I’m curious if you’ve run into this and figured out a way around it?
I've not had that problem at all with GPT-5.2 running in Codex CLI.
I use prompts like this:
1 reply →
I find that surprising. GPT-5.2 is the model I've had working the longest. It frequently works more than 4 hours nonstop, while earlier models would stop every 10 minutes to ask if they should continue. 5.1 and earlier ignore it if I ask them to continue until a task is done, but 5.2 will usually finish it.
What agent framework are you using? It can differ from one to the next on the same model.
1 reply →
How are you guys even doing long tasks with plain Codex or Claude code?
I use Claude code and I get hit with a permissions prompt every 2 seconds for anything I try to do.
Sure, I could turn off all the permission checks, but in my experience it'd honestly probably stop and claim it's finished well before it actually is in most cases.
To be fair I haven't tried Codex, so maybe it's better at this, but in my experience almost every model stops at some point and claims victory, or stops and tells me something like "next we'll continue on with XYZ", at which point I have to prompt it to continue.
You have to use --yolo or --dangerously-skip-permissions options.
Thankfully the cloud versions (Claude Code for web, Codex Cloud) run like that already, and are relatively safe in that if anything goes wrong it happens on someone else's computer.
Codex (at least 5 and 5.1) is bad at asking for permission. Whenever it wants to run pre-commit or platformio, it tries, the attempt fails because of the sandbox, and then Codex decides something is wrong with the cache directory and keeps asking for permission to sudo chown ~/.cache, every time.
I have to specifically tell it to request permission for the command it wants to run, and then it works. Very annoying, and very annoying that it can't persist the permission, like Claude Code can, so it doesn't have to ask again every single time.
Quickly looking at the source code, mostly the treeBuilder and tokenizer, I do see several possible improvements:
- Use TypeScript instead of JavaScript
- Use perfect hashes instead of ["a", "b", "c"].includes() idioms, string equalities, Sets, etc.
- Use a single perfect hash to match all tag/attribute names and then use enums in the rest of the codebase
- Use a single if (token.kind === Tag.START) instead of repeating that for 10 consecutive conditionals
- Don't return the "reprocess" constant, but use an enum, or perhaps nothing if "reprocess" is the only option
- Try tail recursion instead of a switch over the state in the tokenizer
- Use switches (best after a perfect hash lookup) instead of multiple ifs on characters in the tokenizer
- "treeBuilder.openElements = treeBuilder.open_elements;" can't possibly be good code
Perhaps the agent can find these itself if told to make the code perfect and not just pass the tests.
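To illustrate the enum/switch/perfect-hash suggestions, here's a rough sketch of the shape I mean (hypothetical names like `TokenKind`, `Tag`, and `handleInTable`, not the actual justjshtml code; a real perfect hash would replace the `Map` lookup):

```typescript
// Hypothetical token/tag representation: one enum for token kinds and a
// precomputed lookup (a stand-in for a real perfect hash) for tag names,
// so hot paths switch on small integers instead of comparing strings.
enum TokenKind { StartTag, EndTag, Character, Comment, EOF }
enum Tag { Unknown, A, B, Table, Td, Tr }

const TAG_LOOKUP: ReadonlyMap<string, Tag> = new Map<string, Tag>([
  ["a", Tag.A], ["b", Tag.B], ["table", Tag.Table], ["td", Tag.Td], ["tr", Tag.Tr],
]);

interface Token { kind: TokenKind; tag: Tag; data: string }

// Instead of returning a magic "reprocess" string constant:
enum StepResult { Done, Reprocess }

function handleInTable(token: Token): StepResult {
  // One switch on the interned tag id replaces a chain of
  // `token.name === "td" || token.name === "tr" || ...` string comparisons.
  if (token.kind !== TokenKind.StartTag) return StepResult.Done;
  switch (token.tag) {
    case Tag.Td:
    case Tag.Tr:
      // ...insertion-mode-specific handling would go here...
      return StepResult.Reprocess;
    default:
      return StepResult.Done;
  }
}

// Usage: intern the name once at the tokenizer boundary, then dispatch cheaply.
const token: Token = { kind: TokenKind.StartTag, tag: TAG_LOOKUP.get("td") ?? Tag.Unknown, data: "td" };
console.log(handleInTable(token) === StepResult.Reprocess); // true
```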
Thanks for the feedback - I pasted it into a Claude Code session on my phone, here's the resulting PR: https://github.com/simonw/justjshtml/pull/7
I didn't include the TypeScript bit though - it didn't use TypeScript because I don't like adding a build step to my JavaScript projects if I can possibly avoid it. The agent would happily have used TypeScript if I had let it.
I don't like that openElements = open_elements pattern either - it did that because I asked it for a port of a Python library and it decided to support the naming conventions for both Python and JavaScript at once. I told it to remove all of those.
I had it run a micro benchmark too against the before and after - here's the code it used for that: https://github.com/simonw/justjshtml/blob/a9dbe2d7c79522a76f...
After applying your suggestions:
It pushed back against the tail recursion suggestion:
> The current implementation uses a switch statement in step(). JavaScript doesn’t have proper tail call optimization (only Safari implements it), so true tail recursion would cause stack overflow on large documents.
You should take into consideration the time it took to make those 9200 tests originally. If you have good test coverage the agent can go much farther ahead.
Heh, I mostly use AI in the opposite direction to write tests because:
1. That’s the part of development work I hate the most and never really clicked with me
2. AI, up to this point, seems to be better at writing tests than code
Take this with the grain of salt that:
1. I suck
2. My work is mostly in the realm of infrastructure where testing has always been weird and a little dumb
2 replies →
Simon, have you got to the point where you just don’t read the article?
Others have pointed out your interpretation of long task is not the same as the article.
Maybe this is the negative effects of excessive LLM usage that are spoken about.
They were right. I hadn't read enough of the article to understand what was meant by multi-hour tasks. I upvoted them for pointing that out.
2 replies →
What's more amazing is how fast your account empties when they do that.
It's $200/month for the "unlimited" plan.
2 replies →
I'm conflicted about opining on models: no individual has actually done a large sample of real-world tasks with a lot of models to be able to speak with authority, but I kinda think we should each share our dubiously-informed opinions anyway because benchmarks aren't necessarily representative of real-world use and many can clearly be gamed.
Anyhow, I noticed more of a difference trying Opus 4.5 compared to Sonnet 4.5 than I'd noticed from, for example, the last couple Sonnet bumps. Objectively, at 1.66x Sonnet's price instead of the old 5x, it's much more often practical to consider reaching for than past Opus models. Anthropic's basic monthly thing also covers a fair amount of futzing with it in CC.
At the other extreme, another surprise of this family is that Haiku 4.5 with reasoning on is usable: better than Sonnet with thinking off according to some benchmarks, and in any case subjectively decent for point edits, single-page thingies, and small tools.
IMHO, in the software field, learning can be simplified into two phases. The first is exploration, where we read blogs, docs, and books, and listen to lectures and talks. Then comes the second phase of exploitation, where we actually use the thing we learned. You can think of all those “learning from scratch” videos as someone doing phase two. I love phase one and most of the time don’t have the time and energy to sit down and go through phase two. Nowadays, the two phases feel combined, thanks to LLMs. For instance, I wanted to do some animation for visualizations. This week, I learned AnimeJS by watching CCAgent create the animation I wanted, interspersed with questions that were answered with diagrams and text, which takes care of phase one. I do not like letting them run the show, so then comes phase two, where I organize the code, abstract things, and rewrite code, still using their help for long rewrites, but the ideas are totally mine and mine only. This saves a tremendous amount of time.
Opus looks like a big jump from the previous leader (GPT 5.1), but when you switch from "50%" to "80%", GPT 5.1 still leads by a good margin. I'm not sure if you can take much from this - perhaps "5.1 is more reliable at slightly shorter stuff, choose Opus if you're trying to push the frontier in task length".
Yeah. Throwing away expensive tokens and usage limits 50% of the time is not ideal. But I bet by this time next year OSS models will be at that capability!
They should do a 95% and 99% version of the graphs; otherwise it's hard to ascertain whether the failure cases will remain in the elusive category of "stuff humans can do easily but LLMs trip up on despite scaling".
> current models have almost 100% success rate on tasks taking humans less than 4 minutes
The contrary is easily verifiable by everyone individually. It's nowhere near 100%, or even 50%, for few-minute tasks, even with the best models in real-world situations.
I've only noticed that combination (failure of short everyday tasks from SOTA models) on image comprehension, not text.
So some model will misclassify my American black nightshade* weeds as a tomato, but I get consistently OK results for text out from good models unless it's a trick question.
* I reckon, at least; it looked like this to me: https://en.wikipedia.org/wiki/Solanum_americanum#/media/File...
The research from METR, and my comment, is exclusively about software development tasks.
1 reply →
The key insight from this benchmark is using "human-equivalent hours" rather than actual AI execution time. It's measuring capability complexity, not speed.
What's interesting is the 50% vs 80% reliability gap. At 50% success rate on a 4-hour task, you're essentially gambling. If it fails, you've potentially wasted the 4 hours plus the time debugging why it failed.
This is why I think the current "agent" paradigm needs human checkpoints at regular intervals. Let the AI work for 30 minutes, then review progress. Repeat. This way you catch drift early before it compounds.
The other thing missing from these benchmarks: recovery ability. When the AI gets stuck on hour 3 of a 4-hour task, can it recognize the problem and backtrack? Or does it confidently continue down the wrong path?
You’ve only wasted the 4 hours if you didn’t spend them doing something else.
At 50/50 it’s an OK bet if the debugging time is much less than the total human time. Even if the loops are long, you might prefer 4 hours of deep work on an important human thing, or just relaxing, over babysitting the LLM. Assuming that about half the time this pays off with a correctly done thing for very little effort, it’s kind of amazing.
> The key insight from this benchmark is using "human-equivalent hours" rather than actual AI execution time. It's measuring capability complexity, not speed.
> What's interesting is the 50% vs 80% reliability gap. At 50% success rate on a 4-hour task, you're essentially gambling. If it fails, you've potentially wasted the 4 hours plus the time debugging why it failed.
Your first two paragraphs are at odds with each other. If it fails, you've potentially wasted the time it took the agent to *perform* the "it takes humans 4h" long task. Which in most cases is single digit minutes.
That's why one of the solid use cases for agents is doing multiple throwaway proofs of concept to explore a problem or new feature before deciding on a solution to actually implement. Usually you'd have time for one, or maybe none. If it fails you've lost maybe 10 minutes, but likely learned something new about the potential solution.
After spending many hours optimizing some routines, I now think performance optimization is a great benchmark for identifying how generally smart an AI is at helping with some specific piece of code.
Solutions are quite easy to verify with differential testing and produce a number for direct comparison.
Less code is usually better and you generally can't "cheat" by adding more cruft so it nullifies the additive bias. Good optimization requires significant understanding of the underlying structures. Everything has performance tradeoffs so it requires systemic thinking and not just stringing independent pieces together.
So far I've found that Gemini Pro 3 was the best at reasoning about tricky SIMD code but the results with most models were pretty underwhelming.
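For what it's worth, here's a minimal sketch of the differential-testing setup I mean, using a made-up `sumSquaresReference`/`sumSquaresOptimized` pair rather than the actual SIMD code:

```typescript
// Differential test + micro-benchmark: verify the optimized routine against a
// trivially correct reference on random inputs, then compare timings.
function sumSquaresReference(xs: Float64Array): number {
  let acc = 0;
  for (let i = 0; i < xs.length; i++) acc += xs[i] * xs[i];
  return acc;
}

function sumSquaresOptimized(xs: Float64Array): number {
  // Stand-in for the clever version the model produced (here: 4-way unrolled).
  let a = 0, b = 0, c = 0, d = 0, i = 0;
  for (; i + 4 <= xs.length; i += 4) {
    a += xs[i] * xs[i];
    b += xs[i + 1] * xs[i + 1];
    c += xs[i + 2] * xs[i + 2];
    d += xs[i + 3] * xs[i + 3];
  }
  for (; i < xs.length; i++) a += xs[i] * xs[i];
  return a + b + c + d;
}

function randomInput(n: number): Float64Array {
  const xs = new Float64Array(n);
  for (let i = 0; i < n; i++) xs[i] = Math.random() * 2 - 1;
  return xs;
}

// 1. Correctness: outputs must agree (within floating-point tolerance).
for (let trial = 0; trial < 100; trial++) {
  const xs = randomInput(1 + Math.floor(Math.random() * 10_000));
  const ref = sumSquaresReference(xs);
  const opt = sumSquaresOptimized(xs);
  if (Math.abs(ref - opt) > 1e-9 * Math.max(1, Math.abs(ref))) {
    throw new Error(`mismatch: ${ref} vs ${opt}`);
  }
}

// 2. Speed: one number per candidate for direct comparison.
const big = randomInput(1_000_000);
for (const [name, fn] of [["reference", sumSquaresReference], ["optimized", sumSquaresOptimized]] as const) {
  const start = performance.now();
  for (let rep = 0; rep < 50; rep++) fn(big);
  console.log(name, (performance.now() - start).toFixed(1), "ms");
}
```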
The big issue is the 50%; at the 80% threshold the horizon is much shorter. And if you land on the wrong side of that 50% on a 4-hour task, how much time do you need on top of the 4 hours? Retrying only compounds the odds: the chance of two failed attempts in a row is 50% × 50% = 25%, and of four in a row 0.5^4 = 6.25%, so the cost of a run of bad luck is very high.
Is it bad luck though? I would've thought that if the AI can't solve it on the first try, the probability of fixing it on the second try would be higher or lower (depending on the task).
> This is why I think the current "agent" paradigm needs human checkpoints at regular intervals. Let the AI work for 30 minutes, then review progress. Repeat. This way you catch drift early before it compounds.
The problem with this approach is that in 30 minutes, an agent is able to produce a massive amount of stuff. Reviewing all this is a nightmare, in the sense that on the surface it seems fine and it often works, until it doesn't. The bugs introduced are often subtle and their effects manifest later, if ever.
So, for stuff that matters (to me), I prefer not to use agents at all.
Maybe things will change in a year, or 5, or 10. I will keep giving it a try. But for the moment it's just not worth it, and the upside-down workflow it pushes on me is just making me tired and draining the satisfaction from doing my job.
> As shown above, when we fit a similar trend to just the 2024 and 2025 data, this shortens the estimate of when AI can complete month-long tasks with 50% reliability by about 2.5 years.
I don't think I have 50% success rate at month long tasks.
Anything that exceeds one day is pretty hard.
> We believe this work has important implications ...
> First, our work demonstrates an approach ...
The Conclusions section is not for making a sales pitch for your article. It is for summarizing any new knowledge the article brings out.
How does "cost" per frontier task change with time?
Extrapolating any exponential growth is always dangerous, but over, say, 3 years at this pace (roughly five more doublings at 7 months each), we'd go from 2 hours to about 70, or roughly 8 days' work.
Quite scary. But what does cost do over the same timeline? Does it increase with computational complexity? Is it worse, because, IIRC, transformers' computational cost is quadratic in context length? Is it better, through some kind of economies of scale?
I glanced through the article but couldn't find any info on this.
Would be interesting to see Gemini 3.0 Pro benchmarked as well.
Exactly. I don't understand how an article like this ignores the best models out there.
This article was published a long time ago, in March.
That's true, but it looks like it's been updated since then because the benchmarks include Claude Opus 4.5
"Train adversarially robust image model" is not a long task imo
I read their citations (which are actually by the same authors as this paper), and they also count using Python's built-in web server to "build a web server" as a long task.
For folks interested in some of the nuances of this benchmark, I just posted this deep dive:
https://blog.sshh.io/p/understanding-ai-benchmarks
This seems like a good way to measure LLM improvement.
It matches my personal feeling from using progressively better models over time.
I appreciate horizon expansion as a fundamental metric, but duration seems like too crude a measure. We used to like it when computers were fast.
An infinitely unscrupulous model provider could double this five hour result by cutting your output tokens/second in half!
This isn't only a question of gaming the metric: the very strong current small-fast models (4.5 Haiku, Gemini 3 Flash) have no hope of being measured fairly against this - they will succeed or fail much faster just because they are much faster.
How about something like total output token count as the "long term horizon" metric instead?
The time (horizon) here is not that of the model completing the task, but a human completing the task.
Wow that was a garbage comment!
My introduction to this type of model measuring came from an interview where the repeatedly hammered-home point was that Sonnet 4.0 nailed a gigantic refactor (conversion of a large legacy asp.net or similar into react server-side components or similar) in a loop whose runtime was some large number of hours. I mistakenly attributed the same framing here.
Task duration is the time it would take humans to complete the task. The speed of the models, and how long they might take to complete the task, is not part of this metric.
I think the problem here is that the LLM eventually pollutes its context window with so much of the current task that the larger picture and architectural sanity are forgotten in favor of the task at hand.
And software is rarely one and done: after a few rounds like this, the architecture becomes schizophrenic. Combating this tendency usually requires throwing away a lot of the work from these "long tasks" and more closely limiting what the AI is trying to do as they happen. The success of one "long task" is not necessarily a good thing!
This was why server-side compaction in GPT-5.2 was such a big deal. The model is by default provided with a tool that will prioritise the initial task and salient updates in context window compaction, and the new model has been trained to use it.
Ask not what the agent can do you for you, ask what you can do for the agent.
If you fail to break up the task into agent sized chunks, you're the problem.
> We show that this metric has been consistently exponentially increasing over the past 6 years, with a doubling time of around 7 months.
If true, how much of this is a result of:
1. Genuine technical advancement
or:
2. Shoveling trillions of dollars into compute resources in order to service incoming LLM requests in a way that is completely unrealistic over the long term?
In other words… are we talking about genuine, sustainable innovation that we get to take with us moving forward and benefit from? Or are we talking about an “improvement” that is more akin to a mirage that will eventually disappear when the Ponzi scheme eventually collapses?
Much of this is due to vastly better posttraining RL, not models that are much bigger. The idea that most of these gains comes from training really big models, or throwing immensely larger amounts of compute at it, is not really true.
I wonder how much of this stuff is attributable to true model advancement, or if it’s an improvement in the agentic harness? It’s impossible to separate strict model improvement from improvement in the associated tools.
Good point.
Why measure in minutes and not tokens? Seems you could cheat by slowing the ai down.
They measure the time it takes a human to complete the task. They don't care how long the AI takes (although in practice it's much faster than human). Measuring tokens isn't a good idea because newer models can complete tasks using fewer tokens.
Big error bars and METR people are saying the longer end of the benchmark are less accurate right now. I think they mean this is a lower bound!
It's complicated. Opus 4.5 is actually not that good at the 80% threshold, but is above the others at the 50% completion threshold. I read there's a single task around 16h that the model completed, and the broad CI comes from that.
METR currently simply runs out of tasks at 10-20h, and as a result you have a small N and lots of uncertainty there. (They fit a logistic to the discrete 0/1 results to get the thresholds you see in the graph.) They need new tasks, then we'll know better.
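For anyone curious what that logistic fit looks like mechanically, here's a toy sketch with invented task results (not METR's actual data or code): fit success against log task length, then read the 50% horizon off the fitted curve.

```typescript
// Toy version of the METR-style horizon estimate: fit a logistic regression of
// success (0/1) on log(task length in human-minutes), then solve for the
// length at which predicted success probability drops to 50%.
// The task results below are invented for illustration.
const tasks: Array<{ humanMinutes: number; success: number }> = [
  { humanMinutes: 2, success: 1 }, { humanMinutes: 5, success: 1 },
  { humanMinutes: 15, success: 1 }, { humanMinutes: 30, success: 1 },
  { humanMinutes: 60, success: 0 }, { humanMinutes: 90, success: 1 },
  { humanMinutes: 180, success: 0 }, { humanMinutes: 240, success: 1 },
  { humanMinutes: 480, success: 0 }, { humanMinutes: 960, success: 0 },
];

const sigmoid = (z: number) => 1 / (1 + Math.exp(-z));

// Plain gradient ascent on the log-likelihood of p = sigmoid(a + b * ln(minutes)).
let a = 0, b = 0;
const lr = 0.05;
for (let step = 0; step < 20_000; step++) {
  let gradA = 0, gradB = 0;
  for (const t of tasks) {
    const x = Math.log(t.humanMinutes);
    const err = t.success - sigmoid(a + b * x);
    gradA += err;
    gradB += err * x;
  }
  a += lr * gradA / tasks.length;
  b += lr * gradB / tasks.length;
}

// 50% horizon: sigmoid(a + b * ln(T)) = 0.5  =>  T = exp(-a / b).
const horizonMinutes = Math.exp(-a / b);
console.log(`50% time horizon ≈ ${horizonMinutes.toFixed(0)} human-minutes`);
// For an 80% horizon, solve sigmoid(a + b * ln(T)) = 0.8 => T = exp((logit(0.8) - a) / b).
```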
Thanks for this comment. I've been trying to find anything about the huge error bars. Do you have any sources you can share for further reading?
Opus is already the name of an audio codec.
Gemini is already the name of a Greek god, a constellation, a space mission, a crypto exchange, an astrological sign, a car, and a comic villain! How will we ever figure out which one someone is talking about?
Opus: "an artistic work, especially one on a large scale."
The names Haiku, Sonnet, and Opus have not been chosen randomly.
And so much more intuitive than the OpenAI names for their models. I still don't get their naming scheme.
Have you been living under a rock?