
Comment by cfors

3 years ago

Yep, there's a premium on making your architecture more cloudy. However, the best point for Use One Big Server is not necessarily running your big monolithic API server, but your database.

Use One Big Database.

Seriously. If you are a backend engineer, nothing is worse than breaking up your data into self-contained service databases, where everything is passed over REST/RPC. Your product asks will consistently want to combine these data sources (they don't know how your distributed databases look, and oftentimes they really do not care).

It is so much easier to do these joins efficiently in a single database than fanning out RPC calls to multiple different databases, not to mention dealing with inconsistencies, lack of atomicity, etc. etc. Spin up a specific reader of that database if there needs to be OLAP queries, or use a message bus. But keep your OLTP data within one database for as long as possible.
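
To make the contrast concrete, here is a minimal sketch of the same product ask answered by one database join versus a fan-out over two services. The `users`/`orders` schema, DSN and service URLs are hypothetical; psycopg2 and requests are assumed.

```python
# One Big Database: a single round trip, and the database does the join,
# the ordering and the limit.
import psycopg2

conn = psycopg2.connect("dbname=app")  # hypothetical DSN

with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT u.email, o.total
        FROM orders o
        JOIN users u ON u.id = o.user_id
        WHERE o.created_at > now() - interval '7 days'
        ORDER BY o.total DESC
        LIMIT 50
    """)
    top_orders = cur.fetchall()

# Service-per-database: fetch from one service, fan out to another, and
# re-implement the join, sort and limit in application code.
import requests

orders = requests.get("http://orders-svc/orders", params={"since": "7d"}).json()
ids = ",".join(str(o["user_id"]) for o in orders)
users = {u["id"]: u
         for u in requests.get("http://users-svc/users", params={"ids": ids}).json()}
top_orders = sorted(
    ((users[o["user_id"]]["email"], o["total"]) for o in orders),
    key=lambda row: row[1],
    reverse=True,
)[:50]
```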

You can break apart a stateless microservice, but there are few things as stagnant in the world of software as data. Keeping it in one place will keep you nimble for new product features. The boxes that cloud vendors offer today for managed databases are giant!

> Use One Big Database.

> Seriously. If you are a backend engineer, nothing is worse than breaking up your data into self-contained service databases, where everything is passed over REST/RPC. Your product asks will consistently want to combine these data sources (they don't know how your distributed databases look, and oftentimes they really do not care).

This works until it doesn't, and then you land in the position my company finds itself in, where our databases can't handle the load we generate. We can't get bigger or faster hardware because we are already using the biggest and fastest hardware you can buy.

Distributed systems suck, sure, and they make querying across systems a nightmare. However, by giving those aspects up, what you gain is the ability to add new services, features, etc. without running into Scotty yelling "She can't take much more of it!"

Once you get to that point, it becomes SUPER hard to start splitting things out. All of a sudden you have 10000 "just a one-off" queries against several domains that are broken by trying to carve out a domain into a single owner.

  • I don't know the complexity of your project, but more often than not the feeling of doom that comes from hitting that wall is bigger than the actual effort it takes to solve it.

    People often feel they should have anticipated and avoided the scaling issues altogether, but moving from a single DB to a master/replica model, and/or shards or other solutions, is fairly doable, and it doesn't come with worse tradeoffs than if you had sharded/split services from the start. It always feels fragile and bolted on compared to the elegance of the single DB, but you'd also have many dirty hacks to make a multi-DB setup work properly.

    Also, you do that from a position where you usually have money, resources and a good knowledge of your core parts, which is not true when you're still growing at full speed.

    • I can't speak for cogman10, but in my experience, when you start to hit the limits of "one big database" you are generally dealing with some really complicated shit, and refactoring to dedicated read instances, shards, and other DB hacks is just a short-term solution to buy time.

      The long-term solutions end up being difficult to implement and can be high risk, because now you have real customers (maybe not so happy, because now the DB is slow) and probably not much in-house experience for dealing with such large-scale data, and an absolute lack of ability to hire existing talent, as the few people that really can solve for it are up to their ears in job offers.

      19 replies →

    • > I don't know what's the complexity of your project, but more often than not the feeling of doom coming from hitting that wall is bigger than the actual effort it takes to solve it.

      We've spent on and failed at multiple multi-year projects to "solve" the problem. I'm sure there are simpler problems that are easier to disentangle. But not in our case.

      43 replies →

    • One nice compromise is to migrate to using read-only database connections for read tasks from the moment you upgrade from medium sized DB hardware to big hardware. Keep talking to the one big DB with both connections.

      Then when you are looking at the cost of upgrading from big DB hardware to huge DB hardware, you've got another option available to compare cost-wise: a RW main instance and one or more read-only replicas, where your monolith talks to both: read/write to the master and read-only to the replicas via a load balancer.
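
      A minimal sketch of that compromise, assuming psycopg2 and hypothetical DSNs: the application holds two connection handles from the start, so pointing the read side at a replica or a load balancer later is a configuration change rather than a refactor.

      ```python
      import psycopg2

      # Both DSNs point at the same big database today; later the read DSN can
      # point at a replica, or at a load balancer in front of several replicas.
      WRITE_DSN = "host=db-primary dbname=app"  # hypothetical
      READ_DSN = "host=db-primary dbname=app"   # later: host=db-read-lb

      write_conn = psycopg2.connect(WRITE_DSN)
      read_conn = psycopg2.connect(READ_DSN)
      read_conn.autocommit = True  # reads don't need explicit transactions

      def fetch_profile(user_id):
          # Read tasks use the read-only connection from day one.
          with read_conn.cursor() as cur:
              cur.execute("SELECT name, email FROM users WHERE id = %s", (user_id,))
              return cur.fetchone()

      def rename_user(user_id, new_name):
          # Writes always go to the primary.
          with write_conn, write_conn.cursor() as cur:
              cur.execute("UPDATE users SET name = %s WHERE id = %s", (new_name, user_id))
      ```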

  • I've basically been building CRUD backends for websites and later apps since about 1996.

    I've fortunately/unfortunately never yet been involved in a project that we couldn't comfortably host using one big write master and a handful of read slaves.

    Maybe one day a project I'm involved with will approach "FAANG scale" where that stops working, but you can 100% run 10s of millions of dollars a month in revenue with that setup, at least in a bunch of typical web/app business models.

    Early on I did hit the "OMG, we're cooking our database" point where we needed to add read caching. When I first did that, memcached was still written in Perl. So that joined my toolbox very early on (sometime in the late 90s).

    Once read caching started to not keep up, it was easy enough to make the read cache/memcached layer understand and distribute reads across read slaves. I remember talking to Monty Widenius at The Open Source Conference, I think in San Jose around 2001 or so, about getting MySQL replication to use SSL so I could safely replicate to read slaves in Sydney and London from our write master in PAIX.

    I have twice committed the sin of premature optimisation and sharded databases "because this one was _for sure_ going to get too big for our usual database setup". It only ever brought unneeded grief and never actually proved necessary.

  • Many databases can be distributed horizontally if you put in the extra work, would that not solve the problems you're describing? MariaDB supports at least two forms of replication (one master/replica and one multi-master), for example, and if you're willing to shell out for a MaxScale license it's a breeze to load balance it and have automatic failover.

    • I worked at a mobile game company for years and years, and our #1 biggest scaling concern was DB write throughput. We used Percona's MySQL fork/patch/whatever, we tuned as best we could, but when it comes down to it, gaming is a write-heavy application rather than the read-heavy applications I'm used to from ecommerce etc.

      Sharding things out and replicating worked for us, but only because we were microservices-y and we were able to split our schemas up between different services. Still, there was one service that required the most disk space, the most write throughput, the most everything.

      (IIRC it was the 'property' service, which recorded everything anyone owned in our games and was updated every time someone gained, lost, or used any item, building, ally, etc).

      We did have two read replicas and the service didn't do reads from the primary so that it could focus on writes, but it was still a heavy load that was only solved by adding hardware, improving disks, adding RAM, and so on.

    • Not without big compromises and a lot of extra work. If you want a truly horizontally scaling database, and not just multi-master for the purpose of availability, a good example solution is Spanner. You have to lay your data out differently, you're very restricted in what kinds of queries you can make, etc.

      1 reply →

    • For what it's worth, I think distributing horizontally is also much easier if you've already limited your database to specific concerns by splitting it up in different ways. Sharding a very large database with lots of deeply linked data sounds like much more of a pain than sharding something with a limited scope that isn't too deeply linked, because the rest of the data is already in other databases.

      To some degree, sharding brings in a lot of the same complexities as different microservices with their own data store, in that you sometimes have to query across multiple sources and combine in the client.

    • We've done that (MSSQL Always on) and that's what keeps the lights on today. It's not, however, something that'll remain sustainable.

  • Shouldn't your company have started to split things out and plan for hitting the limit of hardware a couple of box sizes back? I feel there is a happy middle ground between "spend months making everything a service for our 10 users" and "welp, looks like we can't upsize the DB anymore, guess we should split things off now?"

  • Isn’t this easily solved with sharding?

    That is, one huge table keyed by (for instance) alphabet and when the load gets too big you split it into a-m and n-z tables, each on either their own disk or their own machine.

    Then just keep splitting it like that. All of your application logic stays the same … everything stays very flat and simple … you just point different queries to different shards.

    I like this because the shards can evolve from their own disk IO to their own machines… and later you can reassemble them if you acquire faster hardware, etc.
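
    For illustration, a hedged sketch of that kind of routing (hypothetical shard hosts and table, assuming psycopg2): splitting a-z into a-m and n-z is then a change to the routing table, not to the queries.

    ```python
    import psycopg2

    # Routing table: first letter of the key -> connection. Hosts are hypothetical.
    SHARDS = [
        ("a", "m", psycopg2.connect("host=shard1 dbname=app")),
        ("n", "z", psycopg2.connect("host=shard2 dbname=app")),
    ]

    def shard_for(key: str):
        first = key[0].lower()
        for lo, hi, conn in SHARDS:
            if lo <= first <= hi:
                return conn
        raise ValueError(f"no shard covers key {key!r}")

    def fetch_customer(name: str):
        conn = shard_for(name)
        with conn.cursor() as cur:
            cur.execute("SELECT id, name, email FROM customers WHERE name = %s", (name,))
            return cur.fetchone()
    ```

    One caveat: alphabetical ranges tend to skew, so hashing the key before applying the ranges is a common variation to keep shards evenly loaded.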

  • > Once you get to that point, it becomes SUPER hard to start splitting things out.

    Maybe, but if you split it from the start you die by a thousand cuts, and likely pay the cost up front, even if you’d never get to the volumes that’d require a split.

  • > Once you get to that point, it becomes SUPER hard to start splitting things out. All of a sudden you have 10000 "just a one-off" queries against several domains that are broken by trying to carve out a domain into a single owner.

    But that's survivorship bias and looking back at things from the perspective of your current problems.

    You know what the least future-proof and scalable project is? The one that gets canceled because it failed to deliver any value in a reasonable time in the early phase. Once you get to "huge project status" you can afford a glacial pace. Most of the time you can't afford that early on - so even if by some miracle you knew what scaling issues you were going to have long term and invested in fixing them early on - it's rarely been a good tradeoff in my experience.

    I've seen more projects fail because they tangle themselves up in unnecessary complexity early on and fail to execute on the core value proposition than I've seen fail from being unable to manage the tech debt 10 years in. Developers like to complain about the second kind, but they get fired over the first. Unfortunately, in today's job market they just resume-pad their failures as "relevant experience" and move on to the next project - so there is no correcting feedback.

  • I'd be curious to know what your company does which generates this volume of data (if you can disclose), what database you are using and how you are planning to solve this issue.

    • Finance. MSSQL.

      There are multiple plans on how to fix this problem but they all end up boiling down to carving out domains and their owners and trying to pull apart the data from the database.

      What's been keeping the lights on is "Always On" and read-only replicas. New projects aren't adding load to the DB and it's simply been slow going getting stuff split apart.

      What we've tried (and failed at) is sharding the data. The main issue we have is a bunch of systems reading directly from the db for common records rather than hitting other services. That means any change in structure requires a bunch of system wide updates.

      1 reply →

  • You can get a machine with multiple terabytes of ram and hundreds of CPU cores easily. If you can afford that, you can afford a live replica to switch to during maintenance.

    FastComments runs on one big DB in each region, with a hot backup... no issues yet.

    Before you go to microservices you can also shard, as others have mentioned.

  • Why can’t the databases handle the load? That is to say, did you see this coming from a while away or was it a surprise?

This is absolutely true - when I was at Bitbucket (ages ago at this point) and we were having issues with our DB server (mostly due to scaling), almost everyone we talked to said "buy a bigger box until you can't any more" because of how complex (and indirectly expensive) the alternatives are - sharding and microservices both have a ton more failure points than a single large box.

I'm sure they eventually moved off that single primary box, but for many years Bitbucket was run off 1 primary in each datacenter (with a failover), and a few read-only copies. If you're getting to the point where one database isn't enough, you're either doing something pretty weird, are working on a specific problem which needs a more complicated setup, or have grown to the point where investing in a microservice architecture starts to make sense.

  • One issue I've seen with this is that if you have a single, very large database, it can take a very, very long time to restore from backups. Or for that matter just taking backups.

    I'd be interested to know if anyone has a good solution for that.

    • Here's the way it works for, say, Postgresql:

      - you rsync or zfs send the database files from machine A to machine B. You would like the database to be off during this process, which will make it consistent. The big advantage of ZFS is that you can stop PG, snapshot the filesystem, and turn PG on again immediately, then send the snapshot. Machine B is now a cold backup replica of A. Your loss potential is limited to the time between backups.

      - after the previous step is completed, you arrange for machine A to send WAL files to machine B. It's well documented. You could use rsync or scp here. It happens automatically and frequently. Machine B is now a warm replica of A -- if you need to turn it on in an emergency, you will only have lost one WAL file's worth of changes.

      - after that step is completed, you give machine B credentials to login to A for live replication. Machine B is now a live, very slightly delayed read-only replica of A. Anything that A processes will be updated on B as soon as it is received.

      You can go further and arrange to load balance requests between read-only replicas, while sending the write requests to the primary; you can look at Citus (now open source) to add multi-primary clustering.
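
      One operational detail worth adding: once the warm or live replica is running, you will want to watch how far behind it is. A hedged sketch of a lag check (assuming psycopg2 and a hypothetical DSN), run against the standby:

      ```python
      import psycopg2

      # Run this against the standby; pg_last_xact_replay_timestamp() returns
      # NULL on a primary, so the check only makes sense on the replica.
      replica = psycopg2.connect("host=db-replica dbname=app")  # hypothetical DSN

      with replica, replica.cursor() as cur:
          cur.execute("SELECT now() - pg_last_xact_replay_timestamp() AS lag")
          (lag,) = cur.fetchone()
          print(f"replication lag: {lag}")
      ```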

      14 replies →

    • Presumably it doesn't matter if you break your DB up into smaller DBs; you still have the same amount of data to back up no matter what. However, now you also have the problem of snapshot consistency to worry about.

      If you need to backup/restore just one set of tables, you can do that with a single DB server without taking the rest offline.
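
      As a hedged illustration of that last point: pg_dump can be limited to a table pattern while the rest of the database keeps serving traffic (table and database names below are hypothetical).

      ```python
      # Dump only the users table from a live database; other tables stay
      # online and untouched. --table accepts a pattern, so users_* also works.
      import subprocess

      subprocess.run(
          ["pg_dump", "--table=users", "--file=users.sql", "app"],
          check=True,
      )
      ```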

      1 reply →

    • Try out pg_probackup. It works on database files directly. Restore is as fast as you can write to your SSD.

      I've set up a pgsql server with timescaledb recently. Continuous backup based on WAL takes seconds each hour, and a complete restore takes 15 minutes for almost 300 GB of data because the 1 GBit connection to the backup server is the bottleneck.

      1 reply →

  • What if your product simply stores a lot of data (i.e. a search engine)? How is that weird?

    • That's fair - I added "are working on a specific problem which needs a more complicated setup" to my original comment as a nicer way of referring to edge cases like search engines. I still believe that 99% of applications would function perfectly fine with a single primary DB.

    • Depends what you mean by a database I guess. I take it to mean an RDBMS.

      RDBMSs provide guarantees that web searching doesn't need. You can afford to lose a piece of data or provide not-quite-perfect results for web stuff. It's just wrong for an RDBMS.

      4 replies →

    • This is not typically going to be stored in an ACID-compliant RDBMS, which is where the most common scaling problem occurs. Search engines, document stores, adtech, eventing, etc. are likely going to have a different storage mechanism where consistency isn't as important.

    • a search engine won't need joins, but it does need other things (i.e. text indexing) that can be split up in a relatively easy way.

I'm glad this is becoming conventional wisdom. I used to argue this in these pages a few years ago and would get downvoted below the posts telling people to split everything into microservices separated by queues (although I suppose it's making me lose my competitive advantage when everyone else is building lean and mean infrastructure too).

In my mind, reasons involve keeping transactional integrity, ACID compliance, better error propagation, avoiding the hundreds of impossible to solve roadblocks of distributed systems (https://groups.csail.mit.edu/tds/papers/Lynch/MIT-LCS-TM-394...).

But also it is about pushing the limits of what is physically possible in computing. As Admiral Grace Hopper would point out (https://www.youtube.com/watch?v=9eyFDBPk4Yw ), covering distance over network wires involves hard latency constraints, not to mention dealing with congestion on those wires.

Physical efficiency is about keeping data close to where it's processed. Monoliths can make much better use of L1, L2, L3, and RAM caches than distributed systems, for speedups often in the order of 100X to 1000X.

Sure it's easier to throw more hardware at the problem with distributed systems but the downsides are significant so be sure you really need it.

Now there is a corollary to using monoliths. Since you only have one DB, that DB should be treated as somewhat sacred; you want to avoid wasting resources inside it. This means being a bit more careful about how you are storing things, using the smallest data structures, normalizing when you can, etc. This is not to save disk, disk is cheap. This is to make efficient use of L1, L2, L3, and RAM.

I've seen boolean true or false values saved as large JSON documents. {"usersetting1": true, "usersetting2": false, "setting1name": "name", ...} with 10 bits of data ending up as a 1k JSON document. Avoid this! Storing documents means the keys (effectively the full table schema) are repeated in every row. It has its uses, but if you can predefine your schema and use the smallest types needed, you gain a lot of performance, mostly through much higher cache efficiency!
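
To put the same point in schema terms, a hedged sketch (hypothetical PostgreSQL tables, shown as strings): the predefined-schema version stores the key names once, in the catalog, while the document version stores them again in every single row.

```python
# Hypothetical PostgreSQL DDL, shown as Python strings for illustration.
COMPACT = """
CREATE TABLE user_settings (
    user_id   bigint PRIMARY KEY,
    setting1  boolean NOT NULL DEFAULT false,
    setting2  boolean NOT NULL DEFAULT false
    -- ...eight more boolean columns: the flags cost a few bytes per row
);
"""

DOCUMENT = """
CREATE TABLE user_settings_json (
    user_id   bigint PRIMARY KEY,
    settings  jsonb NOT NULL
    -- {"usersetting1": true, "usersetting2": false, ...}
    -- every key name is repeated in every row, so ten bits of information
    -- can easily grow to hundreds of bytes and crowd out the caches
);
"""
```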

  • > I'm glad this is becoming conventional wisdom

    It's not though. You're just seeing the most popular opinion on HN.

    In reality it is nuanced like most real-world tech decisions are. Some use cases necessitate a distributed or sharded database, some work better with a single server and some are simply going to outsource the problem to some vendor.

    • > outsource the problem to some vendor

      At least that way you can be certain of failure.

    • Exactly. The HN crowd is obsessed with minimalism and reducing "bloat".

      It has become a cult, where availability and scale requirements are apparently fiction. "You are not FAANG, you don't have these requirements."

  • > I'm glad this is becoming conventional wisdom

    My hunch is that computers caught up. Back in the early 2000s, horizontal scaling was the only way. You simply couldn't handle even reasonably mediocre loads on a single machine.

    As computing becomes cheaper, horizontal scaling is starting to look more and more like unnecessary complexity for even surprisingly large/popular apps.

    I mean you can buy a consumer off-the-shelf machine with 1.5TB of memory these days. 20 years ago, when microservices started gaining popularity, 1.5TB RAM in a single machine was basically unimaginable.

    • Honestly from my perspective it feels like microservices arose strongly in popularity precisely when it was becoming less necessary. In particular the mass adoption of SSD storage massively changed the nature of the game, but awareness of that among regular developers seemed not as pervasive as it should have been.

  • 'over the wire' is less obvious than it used to be.

    If you're in a k8s pod, those calls are really kernel calls. Sure, you're serializing and process switching where you could be just making a method call, but we had to do something.

    I'm seeing fewer 'balls of mud' with microservices. That's not zero balls of mud. But it's not a given for almost every code base I wander into.

    • To clarify, I think stateless microservices are good. It's when you have too many DBs (and sometimes too many queues) that you run into problems.

      A single instance of PostgreSQL is, in most situations, almost miraculously effective at coordinating concurrent and parallel state mutations. To me that's one of the most important characteristics of an RDBMS. Storing data is a simpler, secondary problem. Managing concurrency is the hard problem that I need the most help with from my DB, and having a monolithic DB enables the coordination of everything else, including stateless peripheral services, without resulting in race conditions, conflicts or data corruption.

      SQL is the most popular mostly-functional language. This might be because managing persistent state and keeping data organized and low-entropy is where you get the most benefit from using a functional approach that doesn't add more state. This adds to the effectiveness of using a single transactional DB.

      I must admit that even distributed DBs, like Cockroach and Yugabyte, have recognized this and use the PostgreSQL syntax and protocol. This is good though, it means that if you really need to scale beyond PostgreSQL, you have PostgreSQL-compatible options.
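
      As a small illustration of that coordination (hypothetical table and DSN, assuming psycopg2): two workers reserving stock at the same time cannot race each other, because the row lock serialises them and the transaction commits or rolls back as a unit.

      ```python
      import psycopg2

      conn = psycopg2.connect("dbname=app")  # hypothetical DSN

      def reserve_item(item_id: int, qty: int) -> bool:
          with conn, conn.cursor() as cur:  # one transaction per call
              # FOR UPDATE locks the row; a concurrent caller waits here.
              cur.execute("SELECT stock FROM items WHERE id = %s FOR UPDATE", (item_id,))
              (stock,) = cur.fetchone()
              if stock < qty:
                  return False  # nothing written, nothing to undo
              cur.execute(
                  "UPDATE items SET stock = stock - %s WHERE id = %s", (qty, item_id)
              )
              return True  # commit happens when the `with conn` block exits
      ```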

    • > I'm seeing less 'balls of mud' with microservices.

      The parallel to "balls of mud" with microservices is tiny services that seem almost devoid of any business logic, with all the actual business logic encapsulated in the calls between different services, lambda functions, and so on.

      That's quite nightmarish from a maintenance perspective too, because now it's almost impossible to look at the system from the outside and understand what it's doing. It also means that conventional tooling can't help you anymore: you don't get compiler errors if your lambda function calls an endpoint that doesn't exist anymore.

      Big balls of mud are horrible (I'm currently working with a big-ball-of-mud monolith, I know what I'm talking about), but you can create a different kind of mess with microservices too. Then there are all the other problems, such as operational complexity, or "I now need to update log4j across 30 services".

      In the end, a well-engineered system needs discipline and architectural skills, as well as a healthy engineering culture where tech debt can be paid off, regardless of whether it's a monolith, a microservice architecture or something in between.

    • > I'm seeing less 'balls of mud' with microservices. Thats not zero balls of mud.

      They are probably younger. Give them time :P

  • >"I'm glad this is becoming conventional wisdom. "

    Yup, this is what I've always done and it works wonders. Since I do not have bosses, just clients, I do not give a flying fuck about the latest fashion and do what actually makes sense for me and said clients.

  • I've never understood this logic for webapps. If you're building a web application, congratulations, you're building a distributed system, you don't get a choice. You can't actually use transactional integrity or ACID compliance because you've got to send everything to and from your users via HTTP request/response. So you end up paying all the performance, scalability, flexibility, and especially reliability costs of an RDBMS, being careful about how much data you're storing, and getting zilch for it, because you end up building a system that's still last-write-wins and still loses user data whenever two users do anything at the same time (or you build your own transactional logic to solve that - exactly the same way as you would if you were using a distributed datastore).

    Distributed systems can also make efficient use of cache, in fact they can do more of it because they have more of it by having more nodes. If you get your dataflow right then you'll have performance that's as good as a monolith on a tiny dataset but keep that performance as you scale up. Not only that, but you can perform a lot better than an ACID system ever could, because you can do things like asynchronously updating secondary indices after the data is committed. But most importantly you have easy failover from day 1, you have easy scaling from day 1, and you can just not worry about that and focus on your actual business problem.

    Relational databases are largely a solution in search of a problem, at least for web systems. (They make sense as a reporting datastore to support ad-hoc exploratory queries, but there's never a good reason to use them for your live/"OLTP" data).

    • I really don't understand how anything of what you wrote follows from the fact that you're building a web-app. Why do you lose user data when two users do anything at the same time? That has never happened to me with any RDBMS.

      And why would HTTP requests prevent me from using transactional logic? If a user issues a command such as "copy this data (a forum thread, or a Confluence page, or whatever) to a different place" and that copy operation might actually involve a number of different tables, I can use a transaction and make sure that the action either succeeds fully or is rolled back in case of an error; no extra logic required.

      I couldn't disagree more with your conclusion even if I wanted to. Relational databases are great. We should use more of them.

      4 replies →

    • HTTP requests work great with relational DBs. This is not UDP. If the TCP connection is broken, an operation will either have finished or stopped and rolled back atomically, and unless you've placed unneeded queues in there, you should know of success immediately.

      When you get the HTTP response, you will know the data is fully committed; data that uses it can be refreshed immediately and is accessible to all other systems immediately, so you can perform next steps relying on those hard guarantees. Behind the HTTP request, a transaction can be opened to do a bunch of stuff, including API calls to other systems if needed, and commit the results as an atomic transaction. There are tons of benefits to using it with HTTP.
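
      A hedged sketch of that pattern (Flask and psycopg2 are assumptions, and the thread-copy endpoint is hypothetical): the whole request runs in one transaction, and the response only goes out once the commit has succeeded, so anything the client does next can rely on the data being visible.

      ```python
      from flask import Flask, jsonify, request
      import psycopg2

      app = Flask(__name__)
      conn = psycopg2.connect("dbname=app")  # hypothetical DSN

      @app.post("/threads/<int:thread_id>/copy")
      def copy_thread(thread_id):
          dest_forum = request.args["dest"]
          with conn, conn.cursor() as cur:  # BEGIN ... COMMIT (ROLLBACK on error)
              cur.execute(
                  "INSERT INTO threads (forum, title) "
                  "SELECT %s, title FROM threads WHERE id = %s RETURNING id",
                  (dest_forum, thread_id),
              )
              (new_id,) = cur.fetchone()
              cur.execute(
                  "INSERT INTO posts (thread_id, author, body) "
                  "SELECT %s, author, body FROM posts WHERE thread_id = %s",
                  (new_id, thread_id),
              )
          # The commit has already happened by the time this response is sent.
          return jsonify({"id": new_id}), 201
      ```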

      2 replies →

  • >As Admiral Grace Hopper would point out (https://www.youtube.com/watch?v=9eyFDBPk4Yw ) doing distance over network wires involves hard latency constraints, not to mention dealing with congestions over these wires.

    Even accounting for CDNs, a distributed system is inherently more capable of bringing data closer to geographically distributed end users, thus lowering latency.

I think a strong test a lot of "let's use Google scale architecture for our MVP" advocates fail is: can your architecture support a performant paginated list with dynamic sort, filter and search where eventual consistency isn't acceptable?

Pretty much every CRUD app needs this at some point and if every join needs a network call your app is going to suck to use and suck to develop.

  • I’ve found the following resource invaluable for designing and creating “cloud native” APIs where I can tackle that kind of thing from the very start without a huge amount of hassle: https://google.aip.dev/general

    The patterns section covers all of this and more

    • This is a great resource but the RFC-style documentation says what you SHOULD and MUST do, not HOW to do it ...

  • I don't believe you. Eventual consistency is how the real world works, what possible use case is there where it wouldn't be acceptable? Even if you somehow made the display widget part of the database, you can't make the reader's eyeballs ACID-compliant.

    • Yeah, I can attest that even banks really use best-effort eventual consistency. However, I think it is very difficult to reason about systems that try to use eventual consistency as an abstraction. It's a lot easier to think about explicitly when you have one data source/event that propagates outwards through systems with stronger individual guarantees than eventual consistency.

      1 reply →

  • > if every join needs a network call your app is going to suck to use and suck to develop.

    And yet developers do this every single day without any issue.

    It is bad practice to have your authentication database be the same as your app database. Or you have data coming from SaaS products, third party APIs or a cloud service. Or even simply another service in your stack. And with complex schemas often it's far easier to do that join in your application layer.

    All of these require a network call and join.

    • > It is bad practice to have your authentication database be the same as your app database.

      No, this is resume-driven-development, Google-scale-wannabe FUD. Understand your requirements. Multiple databases is non-trivial overhead. The only reason to add multiple databases is if you need scale that can't be handled via simple caching.

      Of course it's hard to anticipate what level of scale you'll have later, but I can tell you this: for every tiny startup that successfully anticipated their scaling requirements and built a brilliant microservices architecture that proactively paved the way to their success, there's a 100 burnt out husks of companies that never found product market fit because the engineering team was too busy fantasizing about "web-scale" and padding their resume by overengineering every tiny and unused feature they built.

      If you want to get a job at FAANG and suckle at the teat of megacorporations whose trajectory was all based on work done in the early 2000s, by all means study up on "best practices" to recite at your system design interview. On the other hand, if you want to build the next great startup, you need to lose the big-co mentality and start thinking critically from first principles about power-to-weight ratio and YAGNI.

    • > And yet developers do this every single day without any issue.

      And users suffer through unresponsive interfaces and long load times every single day...

      1 reply →

  • > Pretty much every CRUD app needs this at some point and if every join needs a network call your app is going to suck to use and suck to develop.

    _at some point_ is the key word here.

    Most startups (and businesses) can likely get away with this well into Series A or Series B territory.

  • thanks a lot for this comment. I will borrow this as an interview question :)

> Use One Big Database.

I emphatically disagree.

I've seen this evolve into tightly coupled microservices that could be deployed independently in theory, but required exquisite coordination to work.

If you want them to be on a single server, that's fine, but having multiple databases or schemas will help enforce separation.

And, if you need one single place for analytics, push changes to that space asynchronously.

Having said that, I've seen silly optimizations being employed that make sense when you are Twitter, and to nobody else. Slice services up to the point they still do something meaningful in terms of the solution and avoid going any further.

  • I have done both models. My previous job we had a monolith on top of a 1200 table database. Now I work in an ecosystem of 400 microservices, most with their own database.

    What it fundamentally boils down to is that your org chart determines your architecture. We had a single team in charge of the monolith, and it was ok, and then we wanted to add teams and it broke down. On the microservices architecture, we have many teams, which can work independently quite well, until there is a big project that needs coordinated changes, and then the fun starts.

    Like always there is no advice that is absolutely right. Monoliths, microservices, function stores. One big server vs kubernetes. Any of those things become the right answer in the right context.

    Although I’m still in favor of starting with a modular monolith and splitting off services when it becomes apparent they need to change at a different pace from the main body. That is right in most contexts I think.

    • > splitting off services when it becomes apparent they need to change at a different pace from the main body

      yes - this seems to get lost, but the microservice argument is no different from the bigger-picture argument about software design in general. When things change independently, separate and decouple them. It works in code, and so there is no reason it shouldn't apply at the infrastructure layer.

      If I am responsible for the FooBar and need to update it once a week and know I am not going to break the FroggleBot or the Bazlibee which are run by separate teams who don't care about my needs and update their code once a year, hell yeah I want to develop and deploy it as a separate service.

  • To clarify the advice, at least how I believe it should be done…

    Use One Big Database Server…

    … and on it, use one software database per application.

    For example, one Postgres server can host many databases that are mostly* independent from each other. Each application or service should have its own database and be unaware of the others, communicating with them via the services if necessary. This makes splitting up into multiple database servers fairly straightforward if needed later. In reality most businesses will have a long tail of tiny databases that can all be on the same server, with only bigger databases needing dedicated resources.

    *you can have interdependencies when you’re using deep features sometimes, but in an application-first development model I’d advise against this.
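
    A hedged sketch of what that looks like in practice (role, database and host names are hypothetical): each application gets its own database and role on the same server, and splitting one out later is a DSN change.

    ```python
    # Provisioning SQL, shown as a Python string for illustration: one role
    # and one database per application, all on the same big server.
    PROVISION = """
    CREATE ROLE billing_app LOGIN PASSWORD '...';
    CREATE DATABASE billing OWNER billing_app;

    CREATE ROLE catalog_app LOGIN PASSWORD '...';
    CREATE DATABASE catalog OWNER catalog_app;
    """

    # Each application only ever sees its own DSN:
    BILLING_DSN = "host=db-big dbname=billing user=billing_app"
    CATALOG_DSN = "host=db-big dbname=catalog user=catalog_app"
    # Splitting later: change host=db-big to host=db-billing for one of them.
    ```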

    • ">Use One Big Database Server…

      … and on it, use one software database per application.<"

      FWIW that is how it is usually done (and has been done for decades) on mainframes (IBM & UNISYS).

      -----------------------

      "Plus ça change, plus c'est la même chose."

      English: "the more things change, the more they stay the same."

      - old French expression.

  • There's no need for "microservices" in the first place then. That's just logical groupings of functionality that can be separate as classes, namespaces or other modules without being entirely separate processes with a network boundary.

  • Yeah... Dividing your work into microservices while your data is in an interdependent database doesn't lead to great results.

    If you are creating microservices, you must segment them all the way through.

    • I have to say I disagree with this ... you can only separate them if they are really, truly independent. Trying to separate things that are actually coupled will quickly take you on a path to hell.

      The problem here is that most of the microservice architecture divisions are going to be driven by Conway's law, not what makes any technical sense. So if you insist on separate databases per microservice, you're at high risk of ending up with massive amounts of duplicated and incoherent state models and half the work of the team devoted to synchronizing between them.

      I quite like an architecture where services are split except the database, which is considered a service of its own.

      3 replies →

Breaking apart a stateless microservice and then basing it around a giant single monolithic database is pretty pointless - at that stage you might as well just build a monolith and get on with it, as every microservice is tightly coupled to the DB.

  • Note that quite a bit of the performance problems come from writing stuff. You can get away with A LOT if you accept 1. the current service doesn't do (much) writing and 2. it can live with slightly old data. Which I think covers 90% of use cases.

    So you can end up with those services living on separate machines and connecting to read only db replicas, for virtually limitless scalability. And when it realizes it needs to do an update, it either switches the db connection to a master, or it forwards the whole request to another instance connected to a master db.

  • That's true, unless you need

    (1) Different programming languages, e.g. you've written your app in Java but now you need to do something for which the perfect Python library is available.

    (2) Different parts of your software need different types of hardware. Maybe one part needs a huge amount of RAM for a cache, but other parts are just a web server. It'd be a shame to have to buy huge amounts of RAM for every server. Splitting the software up and deploying the different parts on different machines can be a win here.

    I reckon the average startup doesn't need any of that, not suggesting that monoliths aren't the way to go 90% of the time. But if you do need these things, you can still go the microservices route, but it still makes sense to stick to a single database if at all possible, for consistency and easier JOINs for ad-hoc queries, etc.

    • These are both true - but neither requires service-oriented-architecture.

      You can split up your application into chunks that are deployed on separate hardware, and use different languages, without composing your whole architecture into microservices.

      A monolith can still have a separate database server and a web server, or even many different functions split across different servers which are horizontally scalable, and be written in both Java and Python.

      Monoliths have had separate database servers since the 80s (and probably before that!). In fact, part of these applications' defining characteristics at the enterprise level is that they often shared one big central database, as often they were composed of lots of small applications that would all make changes to the central database, which would often end up in a right mess of software that was incredibly hard to de-pick! (And all the software writing to that database would, as you described, be written in lots of different languages.) People would then come along and cake these central databases full of stored procedures to make magic changes to implement functionality that wasn't available in the legacy applications that they can't change because of the risk, and then you have even more of a mess!

  • Agree. Nothing worse than having different programs changing data in the same database. The database should not be an integration point between services.

    • if you have multiple microservices updating the database, you need to have a database access layer service as well.

      there's some real value with abstraction and microservices, but you can still run them against a monolithic database service

      2 replies →

  • Why would you break apart a microservice? Any why do you need to use/split into microservices anyway?

    99% of apps are best fit as monolithic apps and databases and should focus on business value rather than scale they'll never see.

    • > 99% of apps are best fit as monolithic apps and databases and should focus on business value rather than scale they'll never see

      You incorrectly assume that 99% of apps are building these architectures for scalability reasons.

      When in reality it's far more for development productivity, security, use of third-party services, different languages, etc.

      3 replies →

    • Totally agree.

      I guess I just don't see the value in having a monolith made up of microservices - you might as well just build a monolith if you are going down that route.

      And if your application fits the microservices pattern better, then you might as well go down the microservices pattern properly and not give them a big central DB.

      1 reply →

    • Where I work we are looking at it because we are starting to exceed the capabilities of one big database. Several tables are reaching the billions of rows mark and just plain inserts are starting to become too much.

      3 replies →

  • I disagree. Suppose you have an enormous DB that's mainly written to by workers inside a company, but has to be widely read by the public outside. You want your internal services on machines with extra layers of security, perhaps only accessible by VPN. Your external facing microservices have other things like e.g. user authentication (which may be tied to a different monolithic database), and you want to put them closer to users, spread out in various data centers or on the edge. Even if they're all bound to one database, there's a lot to recommend keeping them on separate, light cheap servers that are built for http traffic and occasional DB reads. And even more so if those services do a lot of processing on the data that's accessed, such as building up reports, etc.

    • You've not really built microservices then in the purest sense though - i.e. all the microservices aren't independently deployable components.

      I'm not saying what you are proposing isn't a perfectly valid architectural approach - it's just usually considered an anti-pattern with microservices (because if all the services depend on a single monolith, and a change to a microservice functionality also mandates a change to the shared monolith which then can impact/break the other services, we have lost the 'independence' benefit that microservices supposedly gives us where changes to one microservice does not impact another).

      Monoliths can still have layers to support business logic that are separate from the database anyway.

  • Absolutely. I know someone who considers "different domains" (as in web domains) to count as a microservice!

    What is the point of that? it doesn't add anything. Just more shit to remember and get right (and get wrong!)

> "Use One Big Database."

yah, this is something i learned when designing my first server stack (using sun machines) for a real business back during the dot-com boom/bust era. our single database server was the beefiest machine by far in the stack, 5U in the rack (we also had a hot backup), while the other servers were 1U or 2U in size. most of that girth was for memory and disk space, with decent but not the fastest processors.

one big db server with a hot backup was our best tradeoff for price, performance, and reliability. part of the mitigation was that the other servers could be scaled horizontally to compensate for a decent amount of growth without needing to scale the db horizontally.

Definitely use a big database, until you can't. My advice to anyone starting with a relational data store is to use a proxy from day 1 (or some point before adding something like that becomes scary).

When you need to start sharding your database, having a proxy is like having a super power.

  • Disclaimer: I am the founder of PolyScale [1].

    We see both use cases: single large database vs multiple small, decoupled. I agree with the sentiment that a large database offer simplicity, until access patterns change.

    We focus on distributing database data to the edge using caching. Typically this eliminates read-replicas and a lot of the headache that goes with app logic rewrites or scaling "One Big Database".

    [1] https://www.polyscale.ai/

  • Are there postgres proxies that can specifically facilitate sharding / partitioning later?

> Use One Big Database

Yep, with a passive replica or online (log) backup.

Keeping things centralized can reduce your hardware requirement by multiple orders of magnitude. The one huge exception is a traditional web service, those scale very well, so you may not even want to get big servers for them (until you need them).

If you do this then you'll have the hardest possible migration when the time comes to split it up. It will take you literally years, perhaps even a decade.

Shard your datastore from day 1, get your dataflow right so that you don't need atomicity, and it'll be painless and scale effortlessly. More importantly, you won't be able to paper over crappy dataflow. It's like using proper types in your code: yes, it takes a bit more effort up-front compared to just YOLOing everything, but it pays dividends pretty quickly.

  • This is true IFF you get to the point where you have to split up.

    I know we're all hot and bothered about getting our apps to scale up to be the next unicorn, but most apps never need to scale past the limit of a single very high-performance database. For most people, this single huge DB is sufficient.

    Also, for many (maybe even most) applications, designated outages for maintenance are not only acceptable, but industry standard. Banks have had, and continue to have designated outages all the time, usually on weekends when the impact is reduced.

    Sure, what I just wrote is bad advice for mega-scale SaaS offerings with millions of concurrent users, but most of us aren't building those, as much as we would like to pretend that we are.

    I will say that TWO of those servers, with some form of synchronous replication, and point in time snapshots, are probably a better choice, but that's hair-splitting.

    (and I am a dyed in the wool microservices, scale-out Amazon WS fanboi).

    • > I know we're all hot and bothered about getting our apps to scale up to be the next unicorn, but most apps never need to scale past the limit of a single very high-performance database. For most people, this single huge DB is sufficient.

      True if the reliability is good enough. I agree that many organisations will never get to the scale where they need it as a performance/data size measure, but you often will grow past the reliability level that's possible to achieve on a single node. And it's worth saying that the various things that people do to mitigate these problems - read replicas, WAL shipping, and all that - can have a pretty high operational cost. Whereas if you just slap in a horizontal autoscaling datastore with true master-master HA from day 1, you bypass all of that trouble and just never worry about it.

      > Also, for many (maybe even most) applications, designated outages for maintenance are not only acceptable, but industry standard. Banks have had, and continue to have designated outages all the time, usually on weekends when the impact is reduced.

      IME those are a minority of applications. Anything consumer-facing, you absolutely do lose out (and even if it's not a serious issue in itself, it makes you look bush-league) if someone can't log into your system at 5AM on Sunday. Even if you're B2B, if your clients are serving customers then they want you to be online whenever their customers are.

      1 reply →

  • > If you do this then you'll have the hardest possible migration when the time comes to split it up. It will take you literally years, perhaps even a decade.

    At which point a new OneBigServer will be 100x as powerful, and all your upfront work will be for nothing.

  • > Shard your datastore from day 1

    what about using something like CockroachDB from day 1?

    • I don't know the characteristics of bikesheddb's upstream in detail (if there's ever a production-quality release of bikesheddb I'll take another look), but in general using something that can scale horizontally (like Cassandra or Riak, or even - for all its downsides - MongoDB) is a great approach - I guess it's a question of terminology whether you call that "sharding" or not. Personally I prefer that kind of datastore over an SQL database.

      1 reply →

> Use One Big Database.

It’s never one big database. Inevitably there are backups, replicas, testing environments, staging, development. In an ideal unchanging world where nothing ever fails and workload is predictable, the one big database is also ideal.

What happens in the real world is that the one big database becomes such a roadblock to change and growth that organisations often throw away the whole thing and start from scratch.

  • > It’s never one big database. Inevitably there are backups, replicas, testing environments, staging, development. In an ideal unchanging world where nothing ever fails and workload is predictable, the one big database is also ideal.

    But if you have many small databases, you need

    > backups, replicas, testing environments, staging, development

    all times `n`. Which doesn't sound like an improvement.

    > What happens in the real world is that the one big database becomes such a roadblock to change and growth that organisations often throw away the whole thing and start from scratch.

    Bad engineering orgs will clutch defeat from the jaws of victory no matter what the early architectural decisions were. The one vs many databases/services is almost moot entirely.

Just FYI, you can have one big database, without running it on one big server. As an example, databases like Cassandra are designed to be scaled horizontally (i.e. scale out, instead of scale up).

https://cassandra.apache.org/_/cassandra-basics.html

  • There are trade-offs when you scale horizontally even if a database is designed for it. For example, DataStax's Storage Attached Indexes or Cassandra's hidden-table secondary indexing allow for indexing on columns that aren't part of the clustering/partitioning, but when you're reading you're going to have to ask all the nodes to look for something if you aren't including a clustering/partitioning criteria to narrow it down.

    You've now scaled out, but you now have to ask each node when searching by secondary index. If you're asking every node for your queries, you haven't really scaled horizontally. You've just increased complexity.

    Now, maybe 95% of your queries can be handled with a clustering key and you just need secondary indexes to handle 5% of your stuff. In that case, Cassandra does offer an easy way to handle that last 5%. However, it can be problematic if people take shortcuts too much and you end up putting too much load on the cluster. You're also putting your latency for reads at the highest latency of all the machines in your cluster. For example, if you have 100 machines in your cluster with a mean response time of 2ms and a 99th percentile response time of 150ms, you're potentially going to be providing a bad experience to users waiting on that last box on secondary index queries.

    This isn't to say that Cassandra isn't useful - Cassandra has been making some good decisions to balance the problems engineers face. However, it does come with trade-offs when you distribute the data. When you have a well-defined problem, it's a lot easier to design your data for efficient querying and partitioning. When you're trying to figure things out, the flexibility of a single machine and much cheaper secondary index queries can be important - and if you hit a massive scale, you figure out how you want to partition it then.
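
    For what it's worth, the difference shows up directly in the queries. A hedged sketch with the DataStax Python driver (cassandra-driver), against a hypothetical keyspace and table:

    ```python
    from cassandra.cluster import Cluster

    session = Cluster(["cassandra-1"]).connect("game")  # hypothetical node/keyspace

    # Partition-key lookup: routed to the replicas that own this partition.
    session.execute(
        "SELECT item_id, quantity FROM property WHERE player_id = %s", (42,)
    )

    # Lookup by a non-key column: has to consult nodes across the cluster; with
    # no secondary index, Cassandra even makes you spell out ALLOW FILTERING
    # before it will run such a query.
    session.execute(
        "SELECT player_id FROM property WHERE item_id = %s ALLOW FILTERING", (7,)
    )
    ```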

    • Cassandra was just an example, but most databases can be scaled either vertically or horizontally via sharding. You are right that if misconfigured, performance can be hindered, but this is also true for a database that is being scaled vertically. Generally speaking, you will get better performance with a large dataset by growing horizontally than you would by growing vertically.

      https://stackoverflow.blog/2022/03/14/how-sharding-a-databas...

  • Cassandra may be great when you have to scale a database that you no longer develop significantly. The problem with this DB system is that you have to know all the queries before you can define the schema.

    • > The problem with this DB system is that you have to know all the queries before you can define the schema

      Not true.

      You just need to optimise your schema if you want the best performance. Exactly the same as an RDBMS.

A relative worked for a hedge fund that used this idea. They were a C#/MSSQL shop, so they just bought whatever was the biggest MSSQL server at the time, updating frequently. They said it was a huge advantage, where the limit in scale was more than offset by productivity.

I think it's an underrated idea. There's a lot of people out there building a lot of complexity for datasets that in the end are less than 100 TB.

But it also has limits. Infamously Twitter delayed going to a sharded architecture a bit too long, making it more of an ugly migration.

  • Server hardware is so cheap and fast today that 99% of companies will never hit that limit in scale either.

>"Use One Big Database."

I do, it is running on the same big (relatively) server as my native C++ backend talking to the database. The performance smokes your standard cloudy setup big time. Serving a thousand requests per second on 16 cores without breaking a sweat. I am all for monoliths running on real, non-cloudy hardware. As long as the business scale is reasonable and does not approach FAANG (like for 90% of businesses), this solution is superior to everything else money-, maintenance-, and development-time-wise.

I agree with this sentiment, but it is often misunderstood as a means to force everything into a single database schema. More people need to learn about logically separating schemas within their database servers!

Another area for consolidation is auth. Use one giant Keycloak, with individual realms for every one of the individual apps you are running. Your Keycloak is backed by your one giant database.

I agree that 1BDB is a good idea, but having one ginormous schema has its own costs. So I still think data should be logically partitioned between applications/microservices - in PG terms, one “cluster” but multiple “databases”.

We solved the problem of collecting data from the various databases for end users by having a GraphQL layer which could integrate all the data sources. This turned out to be absolutely awesome. You could also do something similar using FDW. The effort was not significant relative to the size of the application.

The benefits of this architecture were manifold but one of the main ones is that it reduces the complexity of each individual database, which dramatically improved performance, and we knew that if we needed more performance we could pull those individual databases out into their own machine.
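
For anyone curious about the FDW route, here is a hedged sketch of wiring one database into another with postgres_fdw (server, schema, host and credential names are hypothetical); the imported tables can then be joined like local ones.

```python
# Hypothetical postgres_fdw setup, shown as SQL strings executed via psycopg2;
# run against the database that should see the remote tables.
import psycopg2

conn = psycopg2.connect("dbname=reporting")  # hypothetical DSN
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS postgres_fdw")
    cur.execute("""
        CREATE SERVER billing_srv FOREIGN DATA WRAPPER postgres_fdw
            OPTIONS (host 'db-billing', dbname 'billing')
    """)
    cur.execute("""
        CREATE USER MAPPING FOR CURRENT_USER SERVER billing_srv
            OPTIONS (user 'reporting_ro', password 'secret')
    """)
    cur.execute("CREATE SCHEMA billing")
    cur.execute("IMPORT FOREIGN SCHEMA public FROM SERVER billing_srv INTO billing")
    # Now billing.* tables can be joined with local tables in a single query.
```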

I'd say, one big database per service. Oftentimes there are natural places to separate concerns and end up with multiple databases. If you ever want to join things for offline analysis, it's not hard to make a mapreduce pipeline of some kind that reads from all of them and gives you that boundless flexibility.

Then if/when it comes time for sharding, you probably only have to worry about one of those databases first, and you possibly shard it in a higher-level logical way that works for that kind of service (e.g. one smaller database per physical region of customers) instead of something at a lower level with a distributed database. Horizontally scaling DBs sound a lot nicer than they really are.

>>(they don't know how your distributed databases look, and oftentimes they really do not care)

Nor should they, it's the engineer's/team's job to provide the database layer to them with high levels of service without them having to know the details

>Use One Big Database.

It may be reasonable to have two databases, e.g. a class A and a class B for PCI compliance. So context still deeply matters.

Also having a dev DB with mock data and a live DB with real data is a common setup in many companies.

I'm pretty happy to pay a cloud provider to deal with managing databases and hosts. It doesn't seem to cause me much grief, and maybe I could do it better but my time is worth more than our RDS bill. I can always come back and Do It Myself if I run out of more valuable things to work on.

Similarly, paying for EKS or GKE or the higher-level container offerings seems like a much better place to spend my resources than figuring out how to run infrastructure on bare VMs.

Every time I've seen a normal-sized firm running on VMs, they have one team who is responsible for managing the VMs, and either that team is expecting a Docker image artifact or they're expecting to manage the environment in which the application runs (making sure all of the application dependencies are installed in the environment, etc) which typically implies a lot of coordination between the ops team and the application teams (especially regarding deployment). I've never seen that work as smoothly as deploying to ECS/EKS/whatever and letting the ops team work on automating things at a higher level of abstraction (automatic certificate rotation, automatic DNS, etc).

That said, I've never tried the "one big server" approach, although I wouldn't want to run fewer than 3 replicas, and I would want reproducibility so I know I can stand up the exact same thing if one of the replicas go down as well as for higher-fidelity testing in lower environments. And since we have that kind of reproducibility, there's no significant difference in operational work between running fewer larger servers and more smaller servers.

"Your product asks will consistently want to combine these data sources (they don't know how your distributed databases look, and oftentimes they really do not care)."

This isn't a problem if state is properly divided along the proper business domain and the people who need to access the data have access to it. In fact many use cases require it - publicly traded companies can't let anyone in the organization access financial info and healthcare companies can't let anyone access patient data. And of course are performance concerns as well if anyone in the organization can arbitrarily execute queries on any of the organization's data.

I would say YAGNI applies to data segregation as well and separations shouldn't be introduced until they are necessary.

  • "combine these data sources" doesn't necessarily mean data analytics. Just as an example, it could be something like "show a badge if it's the user's birthday", which if you had a separate microservice for birthdays would be much harder than joining a new table.

    • Replace "people" with "features" and my comment still holds. As software, features, and organizations become more complex the core feature data becomes a smaller and smaller proportion of the overall state and that's when microservices and separate data stores become necessary.

At my current job we have four different databases, so I concur with this assessment. I think it's okay to have some data in different DBs if they're significantly different; for example, the user login data could be in its own database. But anything we do that combines e-commerce and testing/certification should, I think, be in one big database so I can do reasonable queries for the information we need. This doesn't include two other databases we have on-prem, one of which is a Salesforce setup and the other an internal application system that essentially marries Salesforce to that. It's a weird, wild environment to navigate when adding features.

> Your product asks will consistently want to combine these data sources (they don't know how your distributed databases look, and oftentimes they really do not care).

I'm not sure how to parse this. What should "asks" be?

  • The phrase "Your product asks will consistently " can be de-abbreviated to "product owners/product managers you work with will consistently request".

  • The feature requests (asks) that product wants to build - sorry for the confusion there.

Mostly agree, but you have to be very strict with the DB architecture. Have very reasonable schema. Punish long running queries. If some dev group starts hammering the DB cut them off early on, don't let them get away with it and then refuse to fix their query design.

The biggest nemesis of big DB approach are dev teams who don't care about the impact of their queries.

Also move all the read-only stuff that can be a few minutes behind to a separate (smaller) server with custom views updated in batches (e.g. product listings). And run analytics out of peak hours and if possible in a separate server.
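
A hedged sketch of one way to do those "a few minutes behind" read views in PostgreSQL (view, table and DSN names are hypothetical), using a materialized view refreshed on a schedule:

```python
import time
import psycopg2

conn = psycopg2.connect("dbname=app")  # hypothetical DSN
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS product_listing AS
        SELECT p.id, p.name, p.price, count(r.id) AS review_count
        FROM products p LEFT JOIN reviews r ON r.product_id = p.id
        GROUP BY p.id, p.name, p.price
    """)
    # REFRESH ... CONCURRENTLY (below) requires a unique index on the view.
    cur.execute(
        "CREATE UNIQUE INDEX IF NOT EXISTS product_listing_id ON product_listing (id)"
    )

while True:
    with conn.cursor() as cur:
        # Rebuilds the listing without blocking readers of the view.
        cur.execute("REFRESH MATERIALIZED VIEW CONCURRENTLY product_listing")
    time.sleep(300)  # listings stay at most a few minutes behind
```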

The rule is: keep related data together. Exceptions are: different customers (who usually don't require each other's data) can be isolated. And if the database becomes the bottleneck, you can separate unrelated services.

Surely having separate DBs all sit on the One Big Server is preferable in many cases. For cases where you really need to extract large amounts of data derived from multiple DBs, there's no real harm in having some cross-DB joins defined in views somewhere. If there are sensible logical ways to break a monolithic service into component stand-alone services, and good business reasons to do so (or it's already been designed that way), then having each talk to its own DB on a shared server should be able to scale pretty well.

Not to mention, backups, restores, and disaster recovery are so much easier with One Big Database™.

  • How is backup restoration any easier if your whole PostgreSQL cluster goes back in time when you only wanted to rewind that one tenant?

    • Your scenario is data recovery, not backup restoration. Wildly different things.

If you get your services right, there is little or no communication between the services, since a microservice should have all the data it needs in its own store.

> they don't know how your distributed databases look, and oftentimes they really do not care

Nor should they.