Comment by otterley

1 day ago

They said in the article that they were running up to 200 pods at a time. Doing some back-of-the-envelope math, $300,000/year across 200 pods is about $0.17 per pod-hour, which is exactly what an EC2 c5.xlarge costs per hour (on demand). That instance has 4 vCPUs, so about 800 vCPUs at peak, at about $0.0425/vCPU-hour.
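The arithmetic above can be sketched as a quick script. All figures are the ones quoted in this comment (the $0.17/hour c5.xlarge on-demand rate is as cited, not looked up):

```python
# Back-of-the-envelope check of the numbers in the comment above.
annual_cost = 300_000              # $/year, from the article
pods = 200                         # peak pod count, from the article
hours_per_year = 24 * 365          # 8760

# Cost per pod-hour if the annual figure were spread evenly across 200 pods
cost_per_pod_hour = annual_cost / pods / hours_per_year
# ~0.1712 -- right at the c5.xlarge on-demand rate of $0.17/hour

c5_xlarge_hourly = 0.17            # on-demand rate as quoted in the comment
vcpus_per_pod = 4                  # c5.xlarge has 4 vCPUs
peak_vcpus = pods * vcpus_per_pod  # 800 vCPUs at peak
cost_per_vcpu_hour = c5_xlarge_hourly / vcpus_per_pod  # 0.0425
```

Note this assumes peak capacity running flat-out 24x7x365, which is exactly the assumption the first question below probes.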

I do have some questions like:

* Did they estimate cost savings based on peak capacity, as though it were running 24x7x365?

* Did they use auto scaling to keep costs low?

* Were they wasting capacity by running a single-threaded app (Node-based) on multi-CPU hardware? (My guess is no, but anything is possible)

This is a helpful breakdown, thanks, @otterley.

It is, by orders of magnitude, larger than any deployment I have been a part of in my ten years as a data scientist/Python developer.

  • This is larger than the resources I have available at Medium-Size-Fabless-Semi-Inc, and larger than the two racks of C++ build farm I once ran. It is of course way larger than StackOverflow, which ran for years on two large machines.

    All for ... a meta-SaaS?