Comment by jesse__

11 hours ago

Yeah man, you're running on a multitasking OS. Just let the scheduler do the thing.

Yeah, this. As I’ve explained to people many times: processes are the only virtualisation you need if you aren’t running a fucked-up pile of shit.

The problem we have is fucked-up piles of shit, not a lack of Kubernetes or containers.

  • Containers are just processes plus some namespacing (see the sketch below); nothing really stops you from running very large tasks on Kubernetes nodes. I think the argument for containers and Kubernetes is pretty strong owing to their operational advantages (OCI images for distributing software, distributed cron jobs in Kubernetes, observability tools like Falco, and so forth).

    So I totally understand why people preemptively choose Kubernetes before they're scaling to the point where a distributed scheduler is strictly necessary. With Hadoop, on the other hand, you're definitely paying a large upfront cost for scalability you very well might not need.
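
    To make "processes plus some namespacing" concrete, here's a minimal sketch (mine, not any runtime's actual code) that starts a shell in fresh UTS, PID, and mount namespaces with nothing but the Go standard library. Assumes Linux and root (or user namespaces set up):

      // Containers-as-processes in a nutshell: clone(2) with namespace
      // flags, nothing more. Run `echo $$` in the resulting shell and it
      // reports PID 1, just like inside a container.
      package main

      import (
          "os"
          "os/exec"
          "syscall"
      )

      func main() {
          cmd := exec.Command("/bin/sh")
          cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
          cmd.SysProcAttr = &syscall.SysProcAttr{
              // Give the child its own hostname, PID, and mount namespaces.
              Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
          }
          if err := cmd.Run(); err != nil {
              panic(err)
          }
      }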

    • Time to market and operational costs are much higher with Kubernetes and containers; that’s from many years of actual experience, both in production and in development. It’s usually a bad engineering decision. If you’re doing a lift and shift, it’s definitely bad. If you’re starting greenfield, it makes sense to pick technology stacks that don’t incur this crap.

      It only makes sense if you’re managing large fleets of big, siloed bits of kit. I’ve not seen that anywhere other than at unnamed big tech companies.

      99.9% of people are just burning money for a fashion show where everyone is wearing clown suits because someone said clown suits are good.

  • Hahhah, yuuuup.

    I can maybe make a case for running in containers if you need some specific security properties, but mostly I think the proliferation of 'fucked up piles of shit' is the problem.

  • Disagree.

    Different processes can need different environments.

    I advocate for something lightweight like FreeBSD jails.

It's all fun and games until the control plane gets killed by the OOM killer.

Naturally, that detaches all your containers, and there's no seamless reattach when the control plane restarts.
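
One partial band-aid, as a minimal sketch of my own rather than anything from this thread: exempt the critical daemon from the OOM killer by pinning its oom_score_adj. Assumes Linux and root, and of course it just redirects the killer at some other process:

    // Mark the calling process as never-kill for the Linux OOM killer.
    // -1000 is the minimum oom_score_adj; negative values need root
    // (CAP_SYS_RESOURCE).
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        if err := os.WriteFile("/proc/self/oom_score_adj", []byte("-1000"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, "adjusting oom_score_adj:", err)
            os.Exit(1)
        }
        fmt.Println("exempt from the OOM killer; it will pick someone else")
    }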

  • Or your CNI implementation is made of rolled-up turds and you lose a node or two from the cluster control plane every day.

    (Large EKS cluster)

Until you need to schedule GPUs or other heterogeneous compute...

  • Are you saying that running your application in a pile of containers somehow helps with that problem? It's the same problem as CPU scheduling; we just don't have good schedulers yet. Lots of people are working on it, though.
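
    For what it's worth, Kubernetes already schedules GPUs by treating them as an opaque countable resource registered by a device plugin. A minimal sketch using the k8s.io/api types (the nvidia.com/gpu resource name comes from NVIDIA's device plugin; the image name is made up):

      // A pod spec that asks the scheduler for one GPU. The scheduler only
      // counts the resource; the device plugin on the node wires up the
      // actual device.
      package main

      import (
          "fmt"

          corev1 "k8s.io/api/core/v1"
          "k8s.io/apimachinery/pkg/api/resource"
      )

      func main() {
          pod := corev1.Pod{
              Spec: corev1.PodSpec{
                  Containers: []corev1.Container{{
                      Name:  "trainer",
                      Image: "example.com/trainer:latest", // hypothetical image
                      Resources: corev1.ResourceRequirements{
                          Limits: corev1.ResourceList{
                              // Extended resources like GPUs go in limits.
                              "nvidia.com/gpu": resource.MustParse("1"),
                          },
                      },
                  }},
              },
          }
          q := pod.Spec.Containers[0].Resources.Limits["nvidia.com/gpu"]
          fmt.Println("GPU limit:", q.String())
      }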