Comment by ryanschaefer

19 hours ago

It’s a watershed moment: basically one of the most uncontrolled applications of an LLM to a robust codebase, done without regard for the implications of doing so.

Anthropic needed something like this, and it needs to go flawlessly. My guess is that nothing will explicitly break. But that’s the difficulty of LLM-generated code: nothing breaks. You’re left with a codebase that swallows errors and appears to work. Silent failures make debugging performance and behavior much harder.
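To be concrete about what "swallows all errors" tends to look like: a minimal hypothetical sketch (the function and file names here are illustrative, not from any real codebase) of the pattern where a broad `except` hides the actual failure behind a plausible default.

```python
import json

def load_user_settings(path: str) -> dict:
    """Illustrative example of the silent-failure pattern."""
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        # Swallows FileNotFoundError, JSONDecodeError, permission
        # errors, everything. The program "works", but now runs
        # with empty settings and leaves no trace of why.
        return {}

settings = load_user_settings("missing.json")
print(settings)  # prints {} with no hint anything went wrong
```

Nothing crashes, the tests that only check the happy path stay green, and the bug surfaces much later as mysterious behavior or degraded performance, far from the line that caused it.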