Comment by tecoholic
20 hours ago
I use two CLIs - Codex and Amp. Almost every time I need a quick change, Amp finishes the task in the time it takes Codex to build context. I think it’s got a lot to do with the system prompt and the “read loop” as well: Amp reads multiple files in one go and gets to the task, but Codex crawls through the files almost one by one. Has anyone else noticed this?
Which GPT model and reasoning level did you use in Codex and Amp?
Generally I have noticed GPT 5.2 Codex is slower compared to Sonnet 4.5 in Claude Code.
Amp doesn't have a conventional model selector - you choose between fast and smart modes (I think that's what they're called).
In smart mode it explores with Gemini Flash and writes with Opus.
Opus is roughly the same speed as Codex, depending on thinking settings.
Amp uses Gemini 3 Flash to explore code first. That model is a great speed/intelligence trade-off, especially for that use case.
What is your general flow with Amp? I plan to try it out myself and have been on the fence for a while.
I do the same thing with both; nothing specific to Amp. But I have read it’s great for brainstorming and planning if I “ask oracle” - oracle being their tool that enables deep thinking. So I tend to use that when I think I have multiple solutions to something, or when the problem is big enough that I need to plan and break it down into smaller ones.