Comment by sgarland

1 day ago

Legitimately asking, how? The only bottleneck should be the DB, and if you can saturate a 128-core DB, I want to see your queries and working set size. Not saying it can’t happen, but it’s rare that someone has actually maxed out MySQL or Postgres without there being some serious schema and query flaws, or just poor / absent tuning.

You’re thinking purely in terms of app performance. Have you ever seen a terrible db schema? Having to suddenly iterate fast with a brittle codebase that doesn’t really allow it is something I’ve seen bring teams to their knees for a year+.

I’ve seen monoliths where, because of their sheer size and how much crap and debt is packed into them, build and deploy processes take several hours if not an entire day for some fix that could be CI/CD’d in seconds if it weren’t such a ball of mud. Then what tends to happen is that the infrastructure around it compensates heavily for it, which turns into its own ball of mud. Nothing wrong with properly scaled monoliths, but it’s a bit naive, in my personal experience, to just scoff at scale when your business succeeding relies on scale at some point. Don’t prematurely optimize, but don’t be oblivious to future scenarios, because they can happen quicker than you think.

  • That was the reality of the fintech I worked at.

    The schema wasn't really a problem, but the sheer number of queries per request was. Often a user opening a page or clicking a button would trigger 100-200 database queries, including updates, which ruled out strategies like "just replicate the data somewhere". It was so badly architected that every morning the app would stop responding as users ran through their routine morning operations. And the company only had around 300 employees.
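
    To illustrate the shape of it (a hypothetical Python/sqlite sketch, not the actual codebase; all names made up), this is the classic N+1 pattern that turns a single click into hundreds of round trips:

        import sqlite3

        # Made-up schema: one account with many positions.
        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE account (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE position (id INTEGER PRIMARY KEY,
                                   account_id INTEGER REFERENCES account(id),
                                   symbol TEXT, qty INTEGER);
            INSERT INTO account (id, name) VALUES (1, 'alice');
            INSERT INTO position (account_id, symbol, qty)
                VALUES (1, 'AAPL', 10), (1, 'MSFT', 5);
        """)

        # The N+1 shape: one query for the list, then one more query per row.
        # A page that renders 100 widgets this way is how one click becomes
        # 100-200 database queries.
        accounts = con.execute("SELECT id, name FROM account").fetchall()
        for acc_id, name in accounts:
            positions = con.execute(
                "SELECT symbol, qty FROM position WHERE account_id = ?",
                (acc_id,),
            ).fetchall()

        # The same data in one round trip, which a replica or cache could serve:
        rows = con.execute("""
            SELECT a.name, p.symbol, p.qty
            FROM account a JOIN position p ON p.account_id = a.id
        """).fetchall()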

    And this was just an internal app; the B2C part was already isolated because we couldn't afford for it to be offline.

    The solution I started working on was something like the strangler fig pattern: replacing parts of the API with new code that talked directly to the ORM. Naturally this didn't make the people who wrote the legacy code happy, but at least the outages stopped.
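
    In case the pattern is unfamiliar, the routing side of it is basically this (a minimal Python sketch; the endpoint names and app objects are made up):

        # Hypothetical route table for a strangler-fig migration: endpoints
        # move to the new code path one at a time; everything else falls
        # through to the untouched legacy app.
        MIGRATED_PREFIXES = ("/accounts", "/positions")  # made-up endpoints

        def handle(path, legacy_app, new_app):
            # Rewritten handlers talk directly to the ORM; the legacy app
            # keeps serving whatever hasn't been migrated yet.
            if path.startswith(MIGRATED_PREFIXES):
                return new_app(path)
            return legacy_app(path)

        # Each rewritten endpoint gets added to MIGRATED_PREFIXES, until the
        # legacy app serves nothing and can be retired.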

    • Sounds so similar to a fintech situation I’ve been in that I swear I was gonna say we worked at the same place, but the company size is wrong. I’ve seen pretty similar things since, enough times to say it’s probably everywhere.

  • > Have you ever seen a terrible db schema?

    I am a DBRE, so yes, unfortunately most days I see terrible schemata.

    > Having to suddenly iterate fast with a brittle codebase that doesn’t really allow it is something I’ve seen bring teams to their knees for a year+.

    IME, the “let’s move fast” mindset causes further problems, because it’s rare that a dev has any inkling about proper data modeling, let alone RDBMS internals. What I usually see are heavily denormalized tables, UUIDs everywhere, and JSON taking the place of good modeling practices. Then they’re surprised when I tell them the issue can’t be fixed with yet another index, or a query rewrite. Turns out when you have the largest instance the cloud provider has, and your working set still doesn’t fit into memory, you’re gonna have a bad time.
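
    To make that concrete (made-up tables, sqlite as a stand-in; the same point applies to Postgres/MySQL B-trees), compare the two shapes:

        import sqlite3

        con = sqlite3.connect(":memory:")

        # What usually lands on my desk: a random UUID stored as text for the
        # key, and a JSON blob standing in for real columns. Random keys
        # scatter writes across the index, and the blob gives the planner
        # nothing to index or collect statistics on.
        con.execute("""
            CREATE TABLE orders_as_found (
                id TEXT PRIMARY KEY,  -- 'a1b2c3d4-...' (UUIDv4 as text)
                payload TEXT          -- JSON: no types, no constraints
            )
        """)

        # The boring alternative: a compact sequential key and typed columns
        # you can actually index, so the working set has a chance of fitting
        # in memory.
        con.execute("""
            CREATE TABLE orders_modeled (
                id INTEGER PRIMARY KEY,
                customer_id INTEGER NOT NULL,
                status TEXT NOT NULL CHECK (status IN ('open', 'paid', 'shipped')),
                total_cents INTEGER NOT NULL
            )
        """)
        con.execute("CREATE INDEX idx_orders_customer ON orders_modeled (customer_id)")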

  • For e-commerce, sure. But for telecom or IoT, it doesn’t take a large company to easily overrun the limits of what Postgres can do.