Comment by duskdozer
11 hours ago
Exactly where I figured it was going. LLMs generate too much "magic" unmaintainable code, so when it breaks, just hope the next model is out and start all over.