Comment by senderista
8 hours ago
I don’t buy that LLMs won’t make off-by-one or memory safety errors, or that they won’t introduce undefined behavior. Not only can they not reason about such issues, but imagine how much buggy code they’re trained on!
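(Editor's note: a minimal, hypothetical sketch of the kind of off-by-one bug leading to undefined behavior that the comment alludes to; it is not taken from any cited codebase.)

```c
/* Illustrative only: the loop bound uses <= instead of <, so the last
 * iteration writes one element past the end of buf -- a classic
 * off-by-one error and undefined behavior in C. */
#include <stdio.h>

int main(void) {
    int buf[4] = {0};
    for (int i = 0; i <= 4; i++) {   /* BUG: should be i < 4 */
        buf[i] = i;                  /* i == 4 writes out of bounds */
    }
    printf("%d\n", buf[0]);
    return 0;
}
```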