Comment by hardwaresofton

3 years ago

Something like this is on my roadmap, would you mind telling me a bit about the metrics and the scale you'd expect? Would you expect always on or more of an ephemeral container?

> Something like this is on my roadmap,

Superb!

> would you mind telling me a bit about the metrics and the scale you'd expect?

It will depend mostly on what the service offers.

If the service only supports running a single isolated container without any scaling whatsoever, then it would be helpful if we could monitor basic stuff like CPU and memory utilization, network traffic, free disk space, and disk IO. If the service supports auto-scaling, then it would be helpful to track all resource utilization rates along with the alarms and events involved. Auto-scaling also implies load balancing, so in that case it would also be helpful to track the basic load-balancing indicators, as well as request logs.
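The "basic stuff" above (CPU time, free disk space) can be sketched as a small sampling helper. This is a hypothetical illustration: the parsing follows the standard Linux cgroup v2 `cpu.stat` key/value format, but whether a given platform exposes those files to the container is an assumption.

```python
# Hypothetical sketch of sampling basic per-container metrics.
# The cgroup v2 cpu.stat format (key/value pairs in microseconds) is the
# documented Linux one; exposure of these files is platform-dependent.
import shutil

def parse_cpu_stat(text: str) -> dict:
    """Parse cgroup v2 cpu.stat content into {metric: microseconds}."""
    metrics = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if value:
            metrics[key] = int(value)
    return metrics

def disk_free_bytes(path: str = "/") -> int:
    """Free disk space (bytes) for the filesystem backing `path`."""
    return shutil.disk_usage(path).free

# Example cpu.stat content, as documented for cgroup v2:
sample = "usage_usec 4500000\nuser_usec 3000000\nsystem_usec 1500000"
print(parse_cpu_stat(sample)["usage_usec"])  # 4500000
```

A real agent would poll these on an interval and ship deltas (CPU usage is a monotonically increasing counter, so rates come from differencing samples).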

In the end it really depends on what services you're planning to offer, and how you'll charge for them. As a user I would need to monitor any metric that is directly or indirectly involved in determining cost, and on top of that I need to monitor performance.

> Would you expect always on or more of an ephemeral container?

The most pressing need would be always-on containers, to be able to take the lift-and-shift onboarding route to managed services, but ephemeral containers sound like function-as-a-service, and those are pretty exciting as well.

  • Thanks for this incredibly detailed answer! All these points make a ton of sense.

    Free disk space would imply elastic block storage or something similar so I’ll need to give that a think!

    V1 is very likely to be always on so great that it’s the core use case for you!

AWS Fargate / GCP Cloud run

Upload a Docker image, specify container size (1 CPU, 2 GB)

Go live

Scale from 1 RPS to 1000 RPS at any time

Stateless

Pay per request or pay per container

  • Combining this with the original comment, they also want some AppRunner-style ergonomics -- I'd like to see just how much of CloudWatch-style monitoring would be expected to be available. Basic things like CPU and memory aren't too hard, but it really does depend on how much of the VM (Firecracker/etc.) one would expect to be able to see, as well as higher-level metrics (RPS, errors, etc.).

    AppRunner, Fargate and Cloud Run have different ergonomics and specifics, but thanks for this outline. v1 is likely to be pay per container, but other than that this feels doable.