Comment by ammut
3 days ago
I've spent quite a bit of time with Codex recently and come to the conclusion that you can't simply say "Let's add custom video controls around ReactPlayer." You need to follow up with a set of strict requirements that set expectations, establish guard rails, and spell out what the final product should do (and not do). Even then it may have a few issues, but continuing to prompt with clearly stated problems (things that don't meet the requirements, or that you forgot to include) usually clears it up.
Code that would have taken me a week to write is done in about 10 minutes. It's likely on average better than what I could personally write as a novice-mid level programmer.
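For concreteness, this is roughly the kind of thing I mean by "custom controls around ReactPlayer". A minimal sketch assuming react-player v2's documented props (url, playing, controls, onProgress) and its seekTo() ref method; the component name and layout are made up:

```tsx
import { useRef, useState } from "react";
import ReactPlayer from "react-player";

// Hypothetical component name; a sketch, not a full implementation.
export function VideoWithControls({ url }: { url: string }) {
  const playerRef = useRef<ReactPlayer>(null);
  const [playing, setPlaying] = useState(false);
  const [played, setPlayed] = useState(0); // playback fraction, 0..1

  return (
    <div>
      {/* Hide the native controls so ours are the only UI. */}
      <ReactPlayer
        ref={playerRef}
        url={url}
        playing={playing}
        controls={false}
        onProgress={({ played }) => setPlayed(played)}
      />
      <button onClick={() => setPlaying((p) => !p)}>
        {playing ? "Pause" : "Play"}
      </button>
      {/* Scrubber: seekTo() treats a 0..1 value as a fraction.
          A real version would also track a `seeking` flag so
          onProgress updates don't fight the drag. */}
      <input
        type="range"
        min={0}
        max={1}
        step={0.001}
        value={played}
        onChange={(e) => {
          const fraction = Number(e.target.value);
          setPlayed(fraction);
          playerRef.current?.seekTo(fraction);
        }}
      />
    </div>
  );
}
```

Even for something this small, the strict requirements (which controls exist, what happens mid-drag, whether native controls stay hidden) are what keep the model from wandering.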
> You need to follow up with a set of strict requirements that set expectations, establish guard rails, and spell out what the final product should do (and not do).
That's usually the very hard part, and it could take a few days to do for real-world work even before LLMs. But with LLMs it's worse, because having those requirements isn't enough: some of them won't work for random reasons, and there are no 'rules' that can guarantee results. It's always 'try that' and 'probably this will work'.
Just recently I struggled with the same prompt producing different results between API calls, before I realized that a few extra '\"' characters and a few spaces in the prompt led the model down a completely different route of logic, which produced opposite answers.
By the time I'd figured out all those quirks and guardrails, I could have done it myself in 45 minutes tops.
This is very true. But each iteration of learning quirks and installing guardrails carries value forward to later sessions. These rough edges get smoother with use, is my point.
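Concretely, one way the value carries forward: Codex reads an AGENTS.md file from the repo, so quirks you learn the hard way in one session can be written down as standing rules for the next. The specific rules below are just illustrative examples:

```
# AGENTS.md (guardrails the agent reads at the start of each session)

- Never add new dependencies without asking first.
- Follow the existing ESLint config; don't reformat unrelated files.
- Run the test suite and include its output before declaring a task done.
```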
It sounds like it takes you at least 10 minutes to just write the prompt with all the details you mentioned. Especially if you need to continue and prompt again (and again?).
Not the OP but, easily. My tasks usually take at least that, but up to hours of brainstorming and planning; sometimes I’ll do this over days in between other tasks just so I can think about all the pros and cons. Of course this has always been the way, but now I have an ongoing Claude session which I can come back to at any point, holding the context along with my brain. It’s much easier to keep the thread of what I’m working on across multiple tasks.
I mean, I typically do a lot more thinking than 10 minutes.
I’m writing some (for me) seriously advanced software that would have taken me months to write, in weeks, using Claude and ChatGPT.
It’s even unlikely I would be able to pull it off myself after a long day’s work.
The LLM doesn’t replace. It works in parallel.
> I’m writing some (for me) seriously advanced software that would have taken me months to write, in weeks, using Claude and ChatGPT.
Do you understand the code?
Where did the speed-up from months to weeks come from? You just didn't know what to type? Or you didn't know the problem domain? Or did you find it hard to 'start', and the AI writing boilerplate gave you motivation?
In my experience with AI tools, they only really help with ideation; most of what they produce needs heavy tweaking, to the point that there's no time saved. It's probably a net negative, because I spend all of my time thinking about how to explain things to a dumb computer rather than thinking about how to solve the problem.