Comment by frje1400
4 days ago
> If your writes are fast, doing them serially does not cause anyone to wait.
Why impose such a limitation on your system when you don't have to? You could instead use a database actually designed for multi-user systems (Postgres, MySQL, etc.).
Because development and maintenance are faster, and the system is easier to reason about. That increases the chances you actually get to 86 million daily active users.
So in this solution, you run the backend on a single node that reads/writes from an SQLite file, and that is the entire system?
That's basically how the web started. You can serve a ridiculous number of users from a single physical machine. It isn't until you get into the hundreds-of-millions-of-users ballpark that you need to actually create architecture. The "cloud" lets you rent a small slice of a physical machine, so it feels like you need more machines than you do. But a modern server? Easily 16-32+ cores, 128+ GB of RAM, and hundreds of TB of space, all for less than $2k per month (amortized). Yes, you need an actual (small) team of people to manage it; but that will get you so far that it is utterly ridiculous.
Assuming you can accept 99% uptime (that's roughly 3.5 days a year of downtime): if you were on a single cloud in 2025, that's basically the uptime you got anyway.
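The setup being described, one backend process serializing fast writes to a single SQLite file, can be sketched in a few lines. This is a minimal illustration, not anyone's actual stack; the table name and pragmas are assumptions, using Python's standard-library `sqlite3` module.

```python
import sqlite3

# One process, one SQLite file, writes serialized through one connection.
# WAL mode lets readers proceed while a write is in flight;
# busy_timeout makes a contending writer wait briefly instead of erroring.
conn = sqlite3.connect("app.db")
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA busy_timeout=5000")
conn.execute(
    "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
)

def add_user(name: str) -> None:
    # Each write is a short transaction. Because the writes are fast,
    # running them serially keeps any queued writer's wait negligible.
    with conn:  # commits on success, rolls back on exception
        conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

add_user("alice")
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

The point the comment makes is that on modern hardware this single-writer path handles a surprisingly high request rate before you need anything more elaborate.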