Comment by spoaceman7777
3 days ago
It depends on whether you asked it to just write the code, or whether you asked it to evaluate the strategy and write the code only if nothing is ambiguous. My default prompt asks the model to provide three approaches to every request, and I pick the one that seems best. Models just follow directions, and the latest do it quite well, though each has a different default level of agreeability and penchant for overdelivering on requests. (Thus the need to learn a model a bit and tweak it to match what you prefer.)
Overall, though, I doubt a current SotA LLM would have much of an issue understanding your request and considering the nuances, assuming you provided it with your preferred approach to solving problems (considering ambiguities, and explicitly asking follow-up questions for more information when it considers that necessary; something I also request in my default prompt).
In the end, what you get out is a product of what you put in. Using these tools is a non-trivial process that takes practice: the better people get with them, the better the results.