Comment by mannyv
7 hours ago
It depends on understanding your app and how things need to be structured. We have what is essentially a video CMS, so we have two parts: a management UI that end users use, and a backend that actually delivers the video and collects metrics.
They are essentially two products, and are designed that way; if the management UI barfed, the backend would keep chugging along forever.
You can combine management and delivery in one app, but that makes delivery more fragile and slower, because delivery presumably has to invoke a lot of useless machinery just to push bytes. I remember working with a Spring app that built up and tore down the entire Spring runtime just to serve a single request, which was an unbelievably dumb thing to do. Spring became the bottleneck, and for most requests there was actually no work being done; 99% of the time was Spring doing Spring things.
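To make that concrete, here's a minimal sketch of that anti-pattern next to the obvious fix. AppConfig and ChunkStore are hypothetical stand-ins, not anything from the actual app:

    import org.springframework.context.ApplicationContext;
    import org.springframework.context.annotation.AnnotationConfigApplicationContext;

    public class DeliveryEndpoint {

        // Anti-pattern: build and destroy the whole Spring runtime per request.
        // Context creation does classpath scanning, bean wiring, and proxying --
        // enormously expensive relative to just pushing bytes.
        // (AppConfig and ChunkStore are hypothetical stand-ins.)
        public byte[] serveChunkSlow(String videoId) {
            try (AnnotationConfigApplicationContext ctx =
                     new AnnotationConfigApplicationContext(AppConfig.class)) {
                return ctx.getBean(ChunkStore.class).read(videoId);
            } // the entire context is torn down again here
        }

        // Fix: build the context once at startup and reuse it forever.
        private static final ApplicationContext CTX =
                new AnnotationConfigApplicationContext(AppConfig.class);

        public byte[] serveChunkFast(String videoId) {
            return CTX.getBean(ChunkStore.class).read(videoId);
        }
    }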
So really, once you separate delivery from management it becomes easier to figure out the minimum amount of stuff you need. Redis, because you need to cache a bunch of metadata and handle lots of connections. MySQL, because you need a persistent store. Lambda, as a thin layer between everything. A CDN, because you don't want to serve bytes out of AWS if you can help it. SQS for what essentially becomes job control. And for metric collection we use Fastly with synthetic logging.
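As a rough sketch of how thin that Lambda layer can be -- this is illustrative, not their code; the key scheme, table name, environment variables, and CDN domain are all made up -- the delivery path is a cache-aside metadata lookup, with anything slow pushed onto the queue:

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import redis.clients.jedis.JedisPooled;
    import software.amazon.awssdk.services.sqs.SqsClient;
    import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.Map;

    public class MetadataHandler implements RequestHandler<Map<String, String>, String> {
        // Created once per Lambda container, reused across invocations.
        private final JedisPooled redis = new JedisPooled(System.getenv("REDIS_HOST"), 6379);
        private final SqsClient sqs = SqsClient.create();

        @Override
        public String handleRequest(Map<String, String> event, Context ctx) {
            String videoId = event.get("videoId");
            String key = "meta:" + videoId;          // hypothetical key scheme

            // 1. Cache-aside: Redis absorbs the read load and the connection count.
            String meta = redis.get(key);
            if (meta == null) {
                meta = loadFromMysql(videoId);       // 2. MySQL is the source of truth.
                redis.setex(key, 300, meta);         // cache for 5 minutes
            }

            // 3. Anything slow (transcode, metrics rollup) goes on the queue,
            //    so the delivery path never waits on it.
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl(System.getenv("JOBS_QUEUE_URL"))
                    .messageBody("{\"type\":\"view\",\"videoId\":\"" + videoId + "\"}")
                    .build());

            // 4. The actual bytes come from the CDN, not from AWS.
            return "https://cdn.example.com/" + meta;
        }

        private String loadFromMysql(String videoId) {
            String sql = "SELECT path FROM videos WHERE id = ?"; // hypothetical schema
            try (Connection c = DriverManager.getConnection(System.getenv("MYSQL_URL"));
                 PreparedStatement ps = c.prepareStatement(sql)) {
                ps.setString(1, videoId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : "";
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }

The point is that the clients outlive the invocation and the handler does nothing slow inline; everything heavy rides the queue.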
To be fair, our AWS cost is low, but our CDN cost is like $1800/mo for some number of PB/mo (5? 10? I forget).
In the old days this would require at least (2 DB + 2 app server + 2 NAS) * 2 locations = 12 boxes. If we were going to do the networking ourselves we'd add 4 F5s. Ideally we'd have the app servers, Redis, and the various lambda jobs on different boxes, so (2 Redis + 2 runners) * 2 locations = 8 more servers. If we didn't use F5s we'd have 2 reverse proxies as the front end at each location. Each box would have 2 PSUs, at least RAID 1, dual NICs, and ECC RAM. I think the lowest-end Dell boxes with those features are like $5k each? Today I'd probably just stuff some 1TB SSDs in them and mirror them instead of going SAS. The NAS would be hard to spec, because you have to figure out how much storage you need, and they can be a pain to reconfigure: you don't want to spend too much up front, but you also don't want downtime while you add more drive space.
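Back-of-the-envelope with those numbers, calling every box $5k (the NAS heads won't really be, and the disks, F5s, colo, and power are all extra):

    (2 DB + 2 app server + 2 NAS) * 2 locations = 12 boxes
    (2 Redis + 2 runners) * 2 locations         =  8 boxes
    20 boxes * ~$5k                             ≈ $100k up front

That's roughly four and a half years of the $1800/mo CDN bill before anything is even racked.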
Having built this out, I can say it's not as easy as you'd think. I've been lucky enough to have built this sort of thing a few times. It's fun to do, but maintaining it can be a PITA. If you don't believe in documentation, your deployment will fail miserably because you did something out of order.