Comment by cogman10
3 years ago
I did in the original comment.
We have over 200 monolith applications, each accessing overlapping schemas of data with their own sets of stored procedures, views, and direct queries. To migrate a portion of that data out into its own database generally requires refactoring a large subset of the 200 monolith apps so they no longer get all the data in one query, but instead get a portion of the data with the query and the rest of the data from a new service.
Sharding the data is equally difficult because even tracing who writes the data is hard; the writes are spread from one side of the system to the other. We've tried to do that through an elaborate system of views, but as you can imagine, those are too slow and cover too much data for some critical applications, so they end up breaking the shard. That, in and of itself, introduces additional complexity with the evolution of the products.
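To give a flavor of the kind of thing involved (this is only a rough sketch, not necessarily our exact setup, and the table/shard names are invented), a SQL Server style partitioned view looks roughly like this. The CHECK constraints are what let the optimizer route queries and writes to a single shard, and it falls over when the hot queries need data from every shard anyway:

    -- two shard tables; the CHECK constraint on shard_id is what lets the
    -- optimizer (and updates through the view) target a single shard
    CREATE TABLE orders_shard1 (
        id INT NOT NULL,
        shard_id INT NOT NULL CHECK (shard_id = 1),
        userId INT NOT NULL,
        PRIMARY KEY (shard_id, id)
    );
    CREATE TABLE orders_shard2 (
        id INT NOT NULL,
        shard_id INT NOT NULL CHECK (shard_id = 2),
        userId INT NOT NULL,
        PRIMARY KEY (shard_id, id)
    );
    GO
    -- apps keep querying "orders"; the view stitches the shards back together
    CREATE VIEW orders AS
    SELECT id, shard_id, userId FROM orders_shard1
    UNION ALL
    SELECT id, shard_id, userId FROM orders_shard2;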
Couple that with the fact that, even with these solutions, a large portion of the organization is not on board (why can't we JUST buy more hardware? JUST get bigger databases?), and these efforts end up being sabotaged from the beginning because not everyone thinks it's a good idea. (And if you think you are different, I suggest just looking at the rest of the comments here on HN that provide 20 different solutions to the problem, some of which are "why can't you just buy more hardware?")
But, to add to all of this, we also just have organizational deficiencies that have really harmed these efforts. Including things like a bunch of random scripts checked into who knows where that are apparently mission critical and reading/writing across the entire database. Generally for things like "the application isn't doing the right thing, so this cron job, run every Wednesday, will go in and fix things up." Quite literally 1000s of those scripts have been written.
This isn't to say we've been 100% unsuccessful at splitting some of the data onto its own server. But it's a long and hard slog.
>Including things like a bunch of random scripts checked into who knows where that are apparently mission critical and reading/writing across the entire database.
This hits pretty hard right now, after reading this whole discussion.
When there is a galaxy with countless star systems of data, it's good to have local owners of the data who, as domain leaders, publish it for others to use, and to build a system that makes subscription and access grants frictionless.
100% agreed, and that's what I've been trying to promote within the company. It's simply hard to get the momentum up to really effect this change. Nobody likes the idea that things have to get a little slower (because you add a new layer between the apps and the data) before they can get faster.
fwiw hacking hundreds of apps literally making them worse by fragmenting their source of record doesn't sound like a good plan. it's no surprise you have saboteurs, your company probably wants to survive and your plan is to shatter its brain.
outside view: you should be trying to debottleneck your sql server if that's the plan the whole org can get behind. when they all want you to succeed you'll find a way.
> fwiw hacking hundreds of apps literally making them worse by fragmenting their source of record doesn't sound like a good plan. it's no surprise you have saboteurs, your company probably wants to survive and your plan is to shatter its brain.
The brain is already shattered. This wouldn't "literally make them worse"; instead it would say "now, instead of everyone in the world hitting the users table directly and adding or removing data from that table, we have one service in charge of managing users".
Far too often we have queries like
SELECT b.*, u.username FROM Bar b JOIN users u ON b.userId = u.id
And why is this query doing that? To get a human readable username that isn't needed but at one point years ago made it nicer to debug the application.
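To be concrete (just a sketch, and Bar's real columns don't matter here): once users has an owner, the common case simply drops the join, and the few callers that genuinely need a display name ask whatever owns users for it instead.

    -- most callers never needed the username at all
    SELECT b.* FROM Bar b
    -- the handful that do would look it up through the user service
    -- (or a cached lookup) rather than joining the users table here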
> you should be trying to debottleneck your sql server if that's the plan the whole org can get behind.
Did you read my post? We absolutely HAVE been working, for years now, at "debottlenecking our sql server". We have a fairly large team of DBAs (about 30) whose whole job is "debottlenecking our sql server". What I'm saying is that we are, and have been, at the edge of tipping over (and more often than not over the edge). We CAN'T buy our way out of this with new hardware because we already have the best available hardware. We already have read-only replicas. We have already tried (and failed at) sharding the data.
The problem is the data doesn't have stewards. As a result, we've spent years developing application code with nobody getting in the way to say "Maybe you shouldn't join these two domains together? Maybe there's another way to do this?"
assuming some beastly server with terabytes of ram, hundreds of fast cores, and an exotic io subsystem capable of ridiculous amounts of low latency iops, I'd guess the perf issue with that example is not sql server struggling with load but rather lock contention from the users table being heavily updated. unless that beast of a server is sitting pegged with a hardware bottleneck, it can probably be debottlenecked by vertically partitioning the users table. ie: split the table into two (or more) to isolate the columns that change frequently from the ones that don't, replace the table with a view that joins it back together, with instead-of triggers conditionally updating the appropriate underlying tables, etc. etc. then when a query like the example above comes in,
sql server sees that you're only selecting username from the users view and eliminates the joins for the more contentious tables and breathes easy peasy
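to make that concrete, a minimal sketch of the idea (table/column names are made up, and you'd also want instead-of insert/delete triggers and so on, this is just the shape of it):

    -- the original users table gets split; the old name lives on as a view
    CREATE TABLE users_static (
        id INT PRIMARY KEY,
        username NVARCHAR(100) NOT NULL
    );
    CREATE TABLE users_hot (
        id INT PRIMARY KEY REFERENCES users_static(id),
        last_login DATETIME2,
        login_count INT
    );
    GO
    -- left join + users_hot's primary key means the optimizer can drop
    -- users_hot entirely when a query only touches username
    CREATE VIEW users AS
    SELECT s.id, s.username, h.last_login, h.login_count
    FROM users_static s
    LEFT JOIN users_hot h ON h.id = s.id;
    GO
    -- instead-of trigger so existing UPDATEs against "users" keep working,
    -- only touching the underlying table whose columns actually changed
    CREATE TRIGGER trg_users_update ON users
    INSTEAD OF UPDATE
    AS
    BEGIN
        IF UPDATE(username)
            UPDATE s SET s.username = i.username
            FROM users_static s JOIN inserted i ON i.id = s.id;

        IF UPDATE(last_login) OR UPDATE(login_count)
            UPDATE h SET h.last_login = i.last_login,
                         h.login_count = i.login_count
            FROM users_hot h JOIN inserted i ON i.id = h.id;
    END;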
> And why is this query doing that? To get a human readable username that isn't needed but at one point years ago made it nicer to debug the application.
imo users should be able to do this and whatever else they want, and it's not even unreasonable to want usernames for debugging purposes forever. I'd expect the db team to support the requirements of the app teams and wouldn't want to have to get data from different sources.