Comment by duskdozer
12 hours ago
Exactly where I figured it was going: LLMs generate too much "magic," unmaintainable code, so when it breaks, just hope the next model is out and start all over.