
Comment by stackskipton

8 hours ago

This is one of those ideas that sounds great and appears simple at first but can grow into a mad monster. Here are my initial thoughts after 5 minutes.

Kyverno requirement makes it limited. There is no "automatic-zone-placement-disabled" function in case someone wants to temporarily disable zone placement but not remove the label. How do we handle the RDS zone changing after the workload is scheduled? No automatic lookup of IPs and zones. What if we only have one node in a specific zone? Are we willing to handle an EC2 failure, or should we trigger a scale-out?

> Kyverno requirement makes it limited.

You don't have to use Kyverno. You could use a standard mutating webhook, but you would have to generate your own certificate and mutate on every Pod CREATE operation. Not really a problem, but it depends.
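A rough sketch of what that registration could look like, assuming a hypothetical `zone-placement` Service fronting your own mutator (all names and paths here are invented; the CA bundle is the certificate you would have to manage yourself, which Kyverno otherwise handles for you):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: zone-placement            # hypothetical name
webhooks:
  - name: zone-placement.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore         # don't block Pod creation if the webhook is down
    clientConfig:
      service:
        name: zone-placement      # hypothetical Service running the mutator
        namespace: kube-system
        path: /mutate
      caBundle: <base64-encoded CA bundle>   # self-managed certificate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
        operations: ["CREATE"]    # invoked on every Pod CREATE
```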

> There is no "automatic-zone-placement-disabled"

True. That's why I chose preferredDuringSchedulingIgnoredDuringExecution instead of requiredDuringSchedulingIgnoredDuringExecution. In my case, where this solution originated, the Kubernetes cluster was already multi-AZ, with at least one node in each AZ. It was nice if the Pod could be scheduled into the same AZ, but it was not a hard requirement.
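For illustration, a minimal sketch of the affinity the mutation would inject; the Pod name, image, and zone value are placeholders, and `topology.kubernetes.io/zone` is the well-known zone label on nodes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app                        # hypothetical
spec:
  affinity:
    nodeAffinity:
      # Soft constraint: the scheduler prefers a matching node but will
      # still place the Pod elsewhere if no node in the zone fits.
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values: ["eu-west-1a"]   # zone resolved from the label
  containers:
    - name: app
      image: nginx                 # placeholder
```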

> No automatic lookup of IPs and zones.

Yup, it would generate a lot of extra "stuff" to mess with: IAM roles, and how to look up IP/subnet information across a multi-account AWS setup with VPC peerings. In our case the static approach was "good enough"; see the sketch below. The subnet/network topology didn't change frequently enough to justify another layer of complexity.
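As an illustration of that static approach, a hypothetical hand-maintained ConfigMap mapping subnet CIDRs to zones (the name, key, and CIDRs are all invented):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: zone-map                   # hypothetical
  namespace: kube-system
data:
  # Static subnet-to-zone map, updated by hand on the rare
  # occasions the network topology changes.
  zones.yaml: |
    10.10.0.0/20: eu-west-1a
    10.10.16.0/20: eu-west-1b
    10.10.32.0/20: eu-west-1c
```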

> What if we only have one node in specific zone?

That's why we defaulted to preferredDuringSchedulingIgnoredDuringExecution and not required.