Comment by gptfiveslow
17 hours ago
GPT5 is HELLISHLY slow. That's all there is to it.
It loves doing a whole bunch of reasoning steps and proclaiming what a very good job it did clearing up its own todo steps and all that mumbo jumbo, but at the end of the day, I only asked it for a small piece of information about nginx try_files that even GPT-3 could answer instantly.
Maybe before you make reasoning models that go on funny little sidequests where they multiply numbers by 0 a couple of times, make them good at identifying the length of a task. Until then, I'll ask little bro and advance only if necessity arrives. And if it ends up gathering dust, well... yeah.
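(For context, the kind of question at issue is a one-liner. A typical try_files use, e.g. a single-page-app fallback, looks something like this — a generic sketch, not the commenter's actual config:)

```nginx
location / {
    # Serve the exact file if it exists, then try it as a directory,
    # otherwise fall back to index.html for client-side routing.
    try_files $uri $uri/ /index.html;
}
```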
This. Speed determines whether I (like to) use a piece of software.
Imagine waiting for a minute until Google spits out the first 10 results.
My prediction: All AI models of the future will give an immediate result, with more and more innovation in mechanisms and UX to drill down further on request.
Edit: After reading my reply I realize that this is also true for interactions with other people. I like interacting with people who give me a 1 sentence response to my question, and only start elaborating and going on tangents and down rabbit holes upon request.
> All AI models of the future will give an immediate result, with more and more innovation in mechanisms and UX to drill down further on request.
I doubt it. In fact I would predict the speed/detail trade-off continues to diverge.
> Imagine waiting for a minute until Google spits out the first 10 results.
What if the instantaneous responses make you waste 10 minutes realizing they were not what you searched for?
I understand your point, but I still prefer instantaneous responses.
Only when the immediate answers become completely useless will I want to look into slower alternatives.
But first "show me what you've got so far", and let me decide whether it's good enough or not.
Grok Fast is fast, but doing a lot of stupid stuff fast actually ends up being slower.
> It loves doing a whole bunch of reasoning steps
If you are talking about local models, you can switch that off. Reasoning is now a common technique for improving the accuracy of the output when the question is more complex.
Only Codex is slow. GPT-5 classic is fast.
The article(§) talks about going from Sonnet 4.5 back to Sonnet 4.0.
(§) You do know that it's a hyperlink, right? /s