Comment by dherls
5 days ago
Giving LLM agents direct, autonomous access to a real production database with write access seems insane to me.
NO ONE, agent or human, should have direct write access to production databases outside of emergency break-glass scenarios. This is why we have stored routines and API layers to pre-define what writes are allowed. The fact that agents CAN autonomously write to a database does not imply that they should.
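As a sketch of what "pre-defined writes" can look like (a hypothetical Postgres example; the table, column, and role names are all invented), the application role gets no table-level write privileges at all, only EXECUTE on a vetted routine:

```sql
-- Hypothetical: the only permitted write path is a reviewed stored
-- routine, not raw UPDATE access on the table.
CREATE FUNCTION record_refund(p_order_id bigint, p_amount numeric)
RETURNS void
LANGUAGE sql
SECURITY DEFINER  -- runs with the owner's privileges, not the caller's
AS $$
  UPDATE orders
     SET refunded_amount = refunded_amount + p_amount
   WHERE id = p_order_id
     AND p_amount <= total_amount;  -- the allowed write is bounded here
$$;

REVOKE ALL ON orders FROM app_role;
GRANT EXECUTE ON FUNCTION record_refund(bigint, numeric) TO app_role;
```

An agent (or human) connecting as `app_role` can then only perform the writes someone has reviewed in advance.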
For the point about query optimization, again, your agents should not be issuing random queries against a production database. We have had the concept of separate analytics databases with different architectures to support exploratory queries for decades.
I agree and hope this is the case for anything serious enough. I also don't see this changing any time soon.
There are ways to give safe access to the data, at least read-only, that don't involve production risk and don't sacrifice privacy. For example, database branches with anonymization. Instead of accessing the prod/staging db, the agent creates a branch and has read/write access to that.
(disclaimer: I work at Xata, where we offer copy-on-write branches for Postgres, and the agent use-cases are the most popular right now)
I totally agree! Even read access is risky, especially when the database holds sensitive/personal information about users.
I’m a DBRE. I spend a good portion of my day with a shell into one or more prod databases. The schema definitions in code are scattered between ORM model definitions, Alembic migrations, and Liquibase migrations, so the only reliable way I have of understanding a schema as it exists is to view it. Plus, I am very comfortable with SQL, and the various system catalogs of both MySQL and Postgres, so it’s a ton easier to work with.
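For example, a single read-only catalog query (`information_schema` exists in both MySQL and Postgres) shows the schema as it actually exists, regardless of what the ORM models claim:

```sql
-- Illustrative only: 'public' is the default Postgres schema;
-- on MySQL, filter table_schema on the database name instead.
SELECT table_name, column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;
```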
Truly sensitive customer information is encrypted, and on an isolated DB cluster that no one has regular access to. I also operate with a read-only grant, because manual writes to a prod DB are generally a terrible idea.
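A read-only grant of that kind might look like this in Postgres (role, database, and schema names are made up; this is a sketch, not a hardening guide):

```sql
-- Hypothetical read-only role for operator shell access.
CREATE ROLE dbre_readonly LOGIN;
GRANT CONNECT ON DATABASE appdb TO dbre_readonly;
GRANT USAGE ON SCHEMA public TO dbre_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO dbre_readonly;
-- also cover tables created after this grant:
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO dbre_readonly;
```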
So what do you do for "okay, we need to run this script that we've decided is a necessary operation". Special account? Everything go through the build server? I've been looking for tooling for "I need to do a production operation but I want it to have proper interlocks and reviews".
You can use something like Flyway on top of your existing git/CI-CD stack. Write the query as a migration, have it reviewed through your normal git code-review process, and merge to run the migration.
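A minimal sketch of that workflow, using Flyway's default versioned-migration naming convention (the file name, table, and values are invented for illustration):

```sql
-- db/migration/V42__backfill_customer_region.sql
-- Goes through normal PR review; Flyway records it in its schema
-- history table, so it runs exactly once per environment.
UPDATE customers
   SET region = 'EU'
 WHERE region IS NULL
   AND country_code IN ('DE', 'FR', 'NL');
```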
If it’s an incident, it’s usually manually run after review, with an audience. If it isn’t, it’s run as a script that goes through normal PR review.
> autonomous access to a real production databases
remember that filesystems are just sophisticated databases
And the same logic applies.
How does that even work in compliance-relevant scenarios where the audit trail shows some LLM messed with the data? Who, if anyone, is on the hook?
My guess is that if the database is subject to auditing then LLM access (obviously writes in particular, but even reads come with exfiltration risks) will be a hard "no" and instant red flag. When it's a person, there is a sense of accountability and opportunity for remediation.
I suppose that LLMs will be treated as a code artifact and liability will shift upstream towards who deployed/approved the access in the first place. Even though code is essentially deterministic, making that association fairly simple, it's going to boil down to this same paradigm.
Perhaps governance rules will evolve to even explicitly forbid it, but my gut feeling is that for what the future determines to be "practical" reasons (right or wrong) LLMs will warrant an entirely new set of rules to allow them to be in the chain at all.
+ EDIT: both my wife and I have experience in this area and the current answer is companies like KPMG don't have an answer yet. Existing rules do help (e.g. there better be good documented reasons why it was used and that access was appropriately scoped, etc), but there is enough ambiguity around these tools so they say "stay tuned, and take caution".
The dev who ran it. The manager who allowed it. The director/VP/CTO who enabled the culture. They all have some responsibility for it.
Dilution of responsibility isn't just dangerous, it's illegal in some industries. Aircraft manufacturers need to log and track every single bolt, panel, and fastener on a plane back to the engineer who installed it. The moment you dilute risk between people you eliminate auditability.
Whoever provided the authorization credentials to the agent is on the hook.
LLM agents are unlocking demand and supply for applications that wouldn't have been possible before due to time constraints though. There's a growing demand for single user or smaller scoped apps where giving LLM agents direct access means velocity. The failure/rollback model is much easier with these as long as we have good backup hygiene.
This makes no sense whatsoever.
It's not news that velocity can be increased if you just give every developer at a company write access to the production databases, owner permissions on all resources, and so on. But at what cost?
The reason we don't do that in most cases is that "move fast and break things" only makes sense for trivial, non-critical applications that don't have any real importance, like Facebook.
There are thousands of small and medium businesses, though. They have maybe one true CRM, and a dozen spreadsheets/files floating around that would benefit from becoming proper apps. People delete spreadsheets all the time!
Sure don't give an LLM agent write access to the modeled CRM that took months/years to build.
But turning a spreadsheet into an app in a few days? By giving the LLM proper read/write capabilities for velocity? I think the case is there for it. Right tool for the right job.
I think the argument would be mostly about the companies where such trivialities like proper auth were given up to maximum possible extent. I'm sure even some bigger ones are only gnashing their teeth over implementing security measures that are required by law and not seeing much point to it.
This comment is savage and I’m here for it.
> There's a growing demand for single user or smaller scoped apps where giving LLM agents direct access means velocity. The failure/rollback model is much easier with these as long as we have good backup hygiene.
This makes no sense to me. For anything that holds sensitive payment or personally identifiable data, direct access to the DB is potentially illegal.
> The failure/rollback model is much easier with these as long as we have good backup hygiene.
Have you actually operated systems like this in production? Even reverting to a DB state that is only seconds old can still lose hundreds or thousands of transactions. Which means loads of unhappy customers. More realistically, recovery points are often minutes or hours behind once you factor in detection, validation and operational overhead.
DB revert is for exceptional disaster recovery scenarios, not something you want in normal day-to-day operations. If you are saying that you want to give LLM full access to prod DB and then revert every time it makes a mistake, you aren't running a serious business.
This narrative seems to come from people who haven't worked on meaningfully complex software systems. They're more like script kiddies than software developers. I don't mean that in a derogatory manner. They're right that LLMs are unlocking new possibilities in the realm of their work. They just don't realize that these new possibilities are constrained to relatively simple applications, or very thin slices of complex systems.
I use an LLM to access my database occasionally, but never in production and never with write access. It is genuinely useful. It would never be useful in a production setting, though.
It's worth noting too that people should be wary of what a read only user means in database land. There are plenty of foot guns where writes can occur with read-like statements, and depending on the schema, maybe this would be a rollback-worthy situation. You really need to understand your database and schema before allowing an LLM anywhere near it, and you should be reviewing every query.
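A few Postgres examples of "read-like" statements that are not pure reads (whether each succeeds depends on how narrowly the grant is scoped, and `refresh_stats` here is a hypothetical function):

```sql
SELECT nextval('order_id_seq');   -- consumes a sequence value (a write)
SELECT * FROM refresh_stats();    -- a plain SELECT can invoke a function
                                  -- that performs writes internally
SELECT * FROM orders FOR UPDATE;  -- takes row locks that block real writers
```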
You are thinking way too hard. This person is a hazard that needs to learn the hard way.
If velocity means letting agents live edit a db, I'm fine being slow. Holy hell. Let these people crash and burn but definitely let me know the app name so I know never to use it first.
I'm not thinking of SaaS or properly built apps with an API, modeled databases, etc. I'm thinking spreadsheets/CSVs/MS Access that thousands of SMBs use to power their critical paths and someone accidentally deletes. Typically single user, maybe a small team. Infrequent writes, lots of reads.
But are those users allowed to see all the data in the database by law? Some privacy laws require that personal information be hidden from employees unless they have a narrow and specific business reason to view it. Blanket full access to a database may be illegal for that reason.
I think a lot of the objections to your post could be answered by reminding folks of how Microsoft Access databases tend to pop up in small businesses as well as corporate environments outside of IT departments. Yes, they're not "proper" databases but they /get business done/ and often serve as v0 before a real app can be properly conceived of.
One can easily imagine an LLM-enabled database that lets a wider audience build meat-and-potatoes line-of-business apps for small team use with minimal compliance concerns.
Yes, that's the right framing. Millions flow through spreadsheets/CSVs/MS Access with none of the auth/backups/architecture people seem to be stuck to.
I saw an article on HN one time about CSVs and how much business still flows through them. Reminds me of the xkcd comic about the one tiny block propping up lots of infrastructure. It stuck with me because it's ripe area for LLM agent based upgrades.
Sure don't give LLMs access to the well architected blocks. But not wanting to improve the brittle areas seems crazy to me even if it's contrarian.
> single user
If you're just vibe coding a tool for yourself, you don't have a 'production database' at all, even if you use database technology for storage. Just like many Android apps use local SQLite DBs, but those aren't production databases.
Of course in this case no traditional wisdom about production databases matters to you. In other words, it's off-topic.
I commented this elsewhere: there are thousands of small and medium businesses, though. They have maybe one true CRM, and a dozen spreadsheets/files floating around that would benefit from becoming proper apps. People delete spreadsheets all the time!
Sure don't give an LLM agent write access to the modeled CRM that took months/years to build.
But turning a spreadsheet into an app in a few days? By giving the LLM proper read/write capabilities for velocity? I think the case is there for it. Right tool for the right job.
1) Can you explain what demand and supply mean in this context?
2) In regards to having good backup hygiene, who is we?
I think of all the pent-up demand for proper applications that are just infeasible when it would take a developer weeks to months to create them. Now it's just a few days with an LLM agent.
Examples for me are all the apps that live in a spreadsheet, or in a MS Access database. Or all the crappy ad backed apps on the iOS app store. People wipe full spreadsheets all the time and backups are the only recovery.
Just last weekend I was frustrated with the poor quality of Pokedex type apps that spam ads left and right. Took just one session with Claude Opus to roll a custom Pokedex. It knew internally about things like the PokeApi dataset, Pokemon data modelling etc. To-the-hour snapshots of the database are trivial for bespoke apps like this so the LLM agent velocity seems like an okay trade off for me.
Clearly people don't agree...