Comment by maxbond

3 days ago

Friends don't let friends run random untrusted code from the Internet. All code is presumed hostile until proven otherwise, even generated code. Giving an LLM write access to a production database is malpractice. On a long enough timeline, the likelihood of the LLM blowing up production approaches 1. This is the result you should expect.

> Yesterday was biggest roller coaster yet. I got out of bed early, excited to get back @Replit ⠕ despite it constantly ignoring code freezes

https://twitter-thread.com/t/1946239068691665187

This wasn't even the first time "code freeze" had failed. The system did them the courtesy of groaning and creaking before collapsing.

Develop an intuition about the systems you're building; don't outsource everything to AI. I've said it before: unless it's the LLM that's responsible for the system and the LLM's reputation at stake, you should understand what you're deploying. An LLM with the potential to destroy your system violating a "code freeze" should cause you to change your pants.

Credit where it is due: they did ignore the LLM telling them recovery was impossible and recovered their database anyway. And eventually (day 10), they accepted that a "code freeze" wasn't a realistic expectation. Their eventual solution was to isolate the agent on a copy of the database that's safe to delete.
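
That isolation pattern is easy to sketch. Here's a minimal illustration in Python with sqlite3 (my own sketch of the idea, not Replit's actual setup; the file names are hypothetical): the agent only ever sees a throwaway copy of the database, opened read-only as a second layer of defense.

```python
import shutil
import sqlite3

# Hypothetical file names, stand-ins for whatever the real database is.
PROD_DB = "production.db"
SANDBOX_DB = "agent_sandbox.db"

def make_sandbox_copy() -> str:
    """Clone the production database; the agent only ever touches the clone."""
    shutil.copyfile(PROD_DB, SANDBOX_DB)
    return SANDBOX_DB

def open_read_only(path: str) -> sqlite3.Connection:
    """Second layer of defense: a connection that refuses writes outright."""
    return sqlite3.connect(f"file:{path}?mode=ro", uri=True)

if __name__ == "__main__":
    # Stand-in for an existing production database.
    prod = sqlite3.connect(PROD_DB)
    prod.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER, name TEXT)")
    prod.commit()
    prod.close()

    # The agent gets a disposable copy, opened read-only.
    agent_conn = open_read_only(make_sandbox_copy())
    print(agent_conn.execute("SELECT name FROM sqlite_master").fetchall())
    # Any write attempt on agent_conn raises sqlite3.OperationalError
    # ("attempt to write a readonly database"), and even if something
    # slipped through, only the throwaway copy would be lost.
```

With a real client/server database the same idea applies: restore a snapshot into a scratch instance for the agent, and give it credentials with no write grants on production.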

Don't enter strangers' cars -> we got Uber

Don't run foreign code from the Internet -> we got LLMs