Comment by stavros
12 hours ago
There are two ways to approach this. One is a priori: "If you aren't doing the same things with LLMs that humans do when writing code, the code is not going to work".
The other is a posteriori: "I want code that works; what do I need to do with LLMs to get it?"
Your approach is the former, which I don't think holds up in practice. You can produce code that works (for some definition of "works") with LLMs without doing it the way a human would.