Comment by simonw
1 year ago
The prompt works because every interaction with an LLM starts from a completely fresh state.
When you reply "write better code" what you're actually doing is saying "here is some code that is meant to do X. Suggest ways to improve that existing code".
The LLM is stateless. The fact that it wrote the code itself moments earlier is immaterial.
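
A minimal sketch of this mechanic, assuming an OpenAI-style chat message format (no network call is made; the function and variable names are illustrative, not any real client's API):

```python
# Chat APIs are stateless: the client resends the entire transcript on
# every turn. "Write better code" only works because the earlier code is
# sitting right there in the resent history.

def build_payload(history, new_user_message):
    """Each request carries the full conversation, not just the new turn."""
    return history + [{"role": "user", "content": new_user_message}]

history = [
    {"role": "user", "content": "Write a function that does X."},
    {"role": "assistant", "content": "def do_x(): ..."},  # the code it "wrote earlier"
]

payload = build_payload(history, "write better code")

# From the model's point of view this is equivalent to a single prompt:
# "here is some code that is meant to do X; suggest ways to improve it."
assert "def do_x" in payload[1]["content"]
assert payload[-1] == {"role": "user", "content": "write better code"}
```

The model never "remembers" writing the code; it simply reads it again as part of the prompt, the same as code pasted in by a stranger.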