Comment by duskdozer
23 days ago
Exactly where I figured it was going. LLMs generate too much "magic," unmaintainable code, so when it breaks, just hope the next model is out and start all over.