
Comment by adamtulinius

1 day ago

If you spin up Kubernetes for "a couple of containers to run your web app", I think you're doing something wrong in the first place, especially coupled with your comment about adding SDN to Kubernetes.

People use Kubernetes for way too small things, and it sounds like you don't have the scale for actually running Kubernetes.

It depends on what you're doing.

My app is a fairly simple node process with some sidecar worker processes. k8s enables me to deploy it 30 times for 30 PRs, trivially, in a standard way, with standard cleanup.

Can I do that without k8s? Yes. To the same standard with the same amount of effort? Probably not. Here, I'd argue the k8s APIs and interfaces are better than trying to do this on AWS (or your preferred cloud provider).
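
The per-PR flow described above can be sketched roughly like this. Everything here (the app name, registry URL, and kubectl usage) is an assumption for illustration, not taken from the thread:

```shell
#!/bin/sh
# Hypothetical sketch of a per-PR deploy: one namespace per PR gives a
# standard deploy path and, crucially, standard cleanup.
# "myapp" and the registry URL are placeholder names.

pr_namespace() {
  echo "myapp-pr-$1"
}

render_manifest() {
  pr="$1"
  ns="$(pr_namespace "$pr")"
  cat <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: $ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: web
          image: registry.example.com/myapp:pr-$pr
EOF
}

# Deploy PR 42:   render_manifest 42 | kubectl apply -f -
# Tear it down:   kubectl delete namespace "$(pr_namespace 42)"
render_manifest 42
```

Real setups usually reach for Helm or Kustomize plus CI, but the shape is the same: template, apply, delete the namespace when the PR closes.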

Where things get complicated is that k8s itself is borderline cloud-provider software. So teams who were previously fine using a managed service are now owning more of the stack, and these random devops heroes aren't necessarily making good decisions everywhere.

So you really have three obvious use cases:

a) You're doing something interesting with the k8s APIs that isn't easy to do on a cloud provider. Essentially, you're a power user.

b) You want a cloud abstraction layer because you're multi-cloud or you want a lock-in bargaining chip.

c) You want cloud semantics without being on a cloud provider.

However, if you're a single developer with a single machine, or a very small team happy working through contended static environments, you can pretty much just put a process on a box and call it done. k8s is overkill here, though not as much as people claim, until the devops heroes start their work.

  • Call me old-fashioned, but I prefer tools like Dokploy that make deployment across different VPSes extremely easy. Dokploy allows me to utilize my home media server, using local instances of forgejo to deploy code, to great effect.

    k8s appears to be a corporate welfare jobs program that only trillion-dollar multinational monopolistic companies can collectively spend hundreds of millions sustaining. Since most companies aren't trillion-dollar monopolies, adopting such measures seems extremely poor.

    All it signals to me is that we have to stop letting SV + VC dictate the direction of tech in our industry, because their solutions are unsustainable and borderline useless for the vast majority of use cases.

    I'll never forget the insurance company I worked at that orchestrated every single repo with a k8s deployment, whose cloud spend was easily in the high six figures a month to handle a workload of 100k MAU where the concurrent peak never went above 5,000 users, something the company knew well with 40 years of records. It literally had a 20-person team whose entire existence was managing the company's k8s setup. The only reason the company could sustain this was that it's an insurance company (insurance companies are highly profitable, don't let them convince you otherwise; so profitable that the government has to regulate how much profit they're legally allowed to make).

    Absolute insanity, unsustainable, and a tremendous waste of limited human resources.

    Glad you like it for your node app tho, happy for you.

    • K8s is just a standardized api for running "programs" on hardware, which is a really difficult problem it solves fairly well.

      Is it complex? Yes, but so is the problem it's trying to solve. Is its complexity still nicer and easier to use than the previous generation of multimachine deployment systems? Also yes.

      4 replies →

    • Just as a quick aside, I tried Coolify, Dokploy, Dockge, and Komodo, and if you're trying to do a Heroku-style PaaS, Dokploy is really good. Hands down the best UX for delivering apps & databases. It's too bad about the licensing. (e.g. OIDC + audit logs behind a paid enterprise license.)

      Coolify is full of features, but the UX suffers and they had a nasty breaking bug at one point (related to Traefik if you want to search it.) Dockge is just a simple interface into your running Docker containers and Komodo is a bit harder to understand/come up with a viable deployment model, and has no built-in support for things like databases.

      6 replies →

    • I took over tech for a POS company some years ago. They were a .net shop with about 80 developers, fewer than 200 concurrent connections, six-figure cloud spend, and zero nines of uptime with a super traditional setup.

      Point being, it's not the tools that cause the problem.

      4 replies →

  • > I'd argue the k8s APIs and interfaces are better than trying to do this on AWS

    I think Amazon ECS is within striking distance, at least. It does less than K8S, but if it fits your needs, I find it an easier deployment target than K8S. There's just a lot less going on.

    • I ran renderapp in ECS before I ran it in k8s.

      The deployment files / structure were mostly equivalent, with the main differences being that I can't shell into ECS and I lose kubectl in favour of looking at the AWS GUI (which for me is a loss, for others maybe not).

      The main difference is that k8s has a lot of optionality, and folks get analysis paralysis with all the potential there. You quickly hit this in k8s when you actually need an addon just to get CloudWatch logs.

      This is also where k8s has sharp edges. Since Amazon takes care of the rest of the infrastructure for you in ECS, you don't really need to worry about contention starving node resources and killing your logging daemon, which you technically could do in k8s.

      However, you'll note that this is a vendor choice. EKS Auto Mode does away with most of the addons you'd otherwise run yourself, simplifying k8s and moving it significantly closer to a vendor-supported solution.
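
      The contention point above is usually handled in k8s with resource requests/limits and priority classes. A hedged sketch of protecting a node-level logging daemon (image and names are placeholder assumptions):

      ```yaml
      # Illustrative fragment: reserving resources so app pods can't
      # starve a node-level logging daemon. All names are placeholders.
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: log-agent
      spec:
        selector:
          matchLabels:
            app: log-agent
        template:
          metadata:
            labels:
              app: log-agent
          spec:
            priorityClassName: system-node-critical
            containers:
              - name: agent
                image: example/log-agent:latest
                resources:
                  requests:
                    cpu: 100m
                    memory: 200Mi
                  limits:
                    cpu: 200m
                    memory: 400Mi
      ```

      With requests set, the scheduler accounts for the daemon's share of the node; the priority class keeps it from being evicted before ordinary app pods.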

      3 replies →

  • Totally, it's all about the primitives. I'm curious whether exe.dev is gonna build on the base, or just leave it up to folks to add all their own bespoke stuff to do containers, logs, etc.

    The last 20 years have given us a lot of great primitives for folks to plug in; I think lots of people don't want to wrangle those primitives, they just want to use them.

  • > a) You're doing something interesting with the k8s APIs, that aren't easy to do on a cloud provider. Essentially, you're a power user. b) You want a cloud abstraction layer because you're multi-cloud or you want a lock-in bargaining chip. c) You want cloud semantics without being on a cloud provider.

    This is well put and it's very similar to the arguments made when comparing programming languages. At the end of the day you can accomplish the same tasks no matter which interface you choose.

    Personally I've never found kubernetes that difficult to use[1]. It has some weird, unpredictable bits, but so do sysvinit and docker; it just ends up being whatever you're used to.

    [1] except for having to install your own network mesh plugin. That part sucked.

Depends. For personal projects, yeah definitely. But at work? Typically the “Platform” team can only afford to support 1 (maybe 2) ways of deployment, and k8s is quite versatile, so even if you need 1 small service, you’ll go with the self-service-k8s approach your Platform team offers. Because the alternative is for you (or your team) to own the whole infrastructure stack for your new deployment model (ecs? lambda? whatever): you need to set up service accounts, secret paths, firewalls, security, pipelines, registries, and a large etc. And most likely, no one will give you access rights for all of that, and your PM won’t accept the overhead either.

So having everyone use the same deployment model (and that’s typically k8s) saves effort. I don’t like it, for sure.

  • This is where I'm at. We use Podman daily to run Python scripts and apps and it's been going great! However, trying to build things like monitoring, secure secret injection, centralized inventory, remote logging, etc. has fallen on us. It has led to some shadow IT (running our own container image registry, hashicorp vault instance, etc.), which makes me hesitant to share with others in the company how we're operating.

    I like to think if we had a K8s environment a lot of this would be built out within it. Having that functionality abstracted away from the developer would be a huge win in my opinion.

I totally agree, but that's not what happens in reality: the average devops knows k8s and will slap it onto anything they see (if only so they can put it on their resume). The average manager hears about k8s, gets convinced they need it, and hires the aforementioned devops to build it.

  • > the average devops knows k8s and will slap it onto anything they see

    This is certainly the case in all the third-person accounts I hear. Online. I've never actually met a single one like that; if anything, those same people are the first to tell me about their Hetzner setups.

    • DevOps here.

      The trouble is that we are literally expected to do this everywhere we go. I've personally advocated for approaches which use, say, a pair of dedicated servers, or VMs as in the GP's example. If you want it outside of AWS/GCP/Azure, you're regarded as a crazy person. If you don't adopt "best practices" (as defined by vendors), then management are scared. Management very often trust the sales and marketing departments of big vendors more than their own staff. Many of us have given up fighting this, because what it comes down to is a massive asymmetry of information and trust.

      12 replies →

  • And the average developer doesn't even know where to start to deploy things in prod. Once the feature product asked for passes QA... on to the next sprint! We are done!

    • Whose responsibility is it to establish the prerequisite CI/CD pipelines, HITL workflows, and observability infra needed for devs to shepherd changes to prod (and track their impact)? Hint: it's not the developer's.

      5 replies →

  • > the average devops knows k8s

    If you really know Kubernetes, you know not to use it. I say that as someone who used to do consulting for it.

    The reality is that, yet again, "making money" completely collides with efficient, sane, productive, quality work.

    For me one of the main reasons to leave that space was that I couldn't really deal with the fact that my work collided with a client's success. That said, I have helped clients get off that stuff, and off other things they thought they needed that just wasted time and money. It feels odd going into a company that hired you to consult on a topic only to end up telling them "the best approach for you is not doing that at all". Often: never doing it. Some people thought "well, what if we have hundreds of thousands or even millions of users", but even in those scenarios, once you moved from that abstract thought to a hypothetical based on their actual product, they realized they'd still be better off without it. Besides, that hypothetical usually sat far enough in the future that they admitted they'd likely have a completely different setup by then, so preparing for it didn't even make sense.

    I think a big thing related to that was/is the microservice craze, where people move to a complex architecture for not many good reasons and then increase complexity way faster than what they actually deliver in terms of product, because it somehow feels good. I know it does; I've been there. In reality the outcome is often just a complex mess out of what could have been a relatively simple monolith. And these monoliths do work. In the vast majority of cases they are easy to scale, because your problem switches from "how do we best allocate this huge number of very different services across our infrastructure" to (for the most part) "how do we spin up our monolith on one more server", which tends to be a much easier problem to tackle.

    And nothing stops you from still using everything else if you want. Just because it's a monolith doesn't mean you need to skip any of the cloud offerings, etc. For some reason there seems to be this idea that if you write a monolith you are somehow barred from using modern tooling, infrastructure, services, etc. Not sure where that comes from.

    • I think one big misconception is that using a microservice architecture means literally everything has to be a "microservice". If you don't truly need granular scaling (i.e. your "app" doesn't get a bunch of asymmetric load across different paths), you can just have more monolithic "microservices" until they need to be split up

      imo this should achieve a nice balance?

      1 reply →

In some sense, Kubernetes is just a portable platform for running Linux services, even on a single node using something like K3s. I almost see it as being an extension of the Linux OS layer.

  • This is what I do for small stuff, debian vm, k3s on it for a nicer http based deployment api.

  • Then why can't we put a wrapper on systemd and make that into a lightweight k8s?

  • Yep, this is the way. k8s is just a platform for running services on one or more computers without needing to know about those computers individually, and even if your scale is 1, it's often easier to install k3s and manage your services with it rather than memorizing a bunch of disparate tools with their own configuration languages, filepath conventions, etc. It's just a lot easier to use k3s than it is to cobble together stuff with traditional linux tools. It's a standard, scalable pane of glass and as much as I may dislike kubectl, it's worlds better than systemctl and journalctl and the like.

I know that "resume-driven development" exists, where the tradeoffs between approaches aren't about the technical fit of the solution but about career trajectory. I've seen people write plain workstation-setup scripts in Rust, only to have something to flex about in interviews.

I'm not surprised in the slightest that DevOps workers will slap k8s on everything, to show "real industry experience" in a job market where the resume has to match the tools.

  • Your first example sounds very sensible to me?

    Using new technology in something small and unimportant like a setup script is a perfect way to experiment and learn. It would be irresponsible to build something important as the first thing you do in a new language.

    • For your own use, yes.

      But if you're working with others, you should default to using standard industry tools (absent a compelling reason not to) because your work will be handed off to others and passed on to new team members. It's unreasonable to expect that a new Windows or Linux sysadmin or desktop support tech must learn Rust to maintain a workstation setup workflow.

    • agreed. I think if we all went with this HN mindset of "html4 and PHP work just fine", we wouldn't have made any of the technical advancements we enjoy today in the software space

  • We are building a religion, we are building it bigger
    We are widening the corridors and adding more lanes
    We are building a religion, a limited edition
    We are now accepting coders linking new AI brains

    (Apologies to Cake. And coders.)

  • there are also people with a devops title who don't know anything other than the hammer, and then everything is a hammer problem.

    I mean, I worked with people who were surprised that you can run more than one application inside an ec2 vm.

    • > there are also people with a devops title who don't know anything other than the hammer, and then everything is a hammer problem.

      To be fair though, that's true for every profession or skill.

      > I mean, I worked with people who were surprised that you can run more than one application inside an ec2 vm.

      I've seen something similar, where people were surprised that you can use object storage (so, effectively, "make HTTP requests") from any server.

    • Conversely, we had millions of server huggers before, who each knew their company's stuff in a way that wasn't really applicable if they went somewhere else.

      Every company used to have a bespoke collection of build, deployment, monitoring, scaling, etc concerns. Everyone had their own practices, their own wikis to try to make sense of what they had.

      I think we critically under-appreciate that k8s is a social technology that is broadly applicable. Not just for hosting containers, but as a cloud-native way of thinking, where it becomes much easier to ask: what do we have here, and is it running well? And to have systems that help you keep all of that on track (autonomic behavior/control loops).

      I see such rebellion and disdain for where we are now, but so few people who seem able to recognize and grapple with what absolute muck we so recently crawled out of.

> People use Kubernetes for way too small things, and it sounds like you don't have the scale for actually running Kubernetes.

This is a problem I've run into in enterprise deployments. K8s is often the lowest common denominator that semi-small platform engineering teams arrive at. At my current employer, a platform-managed K8s namespace is the only thing we get in terms of a PaaS offering, so it is what we use. Is it overpowered? Yes. Is it overly complex for our use case? Definitely. Could we basically get by hosting our services on a few cheap mini computers with no performance penalty? Also yes.

Doing Kubernetes, like doing Agile, is mandatory nowadays. I've been asked to package a 20-line bash script as a docker image so it could be delivered via a CI/CD pipeline to Kubernetes pods in the cloud.

The value is not that I got the job done at a day's notice. It is a black mark that I couldn't package it as per industry best practices.

Not doing so would mean being out of a job. Whether it is happening correctly is not something decision makers care about, as long as it gets done.
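
For what it's worth, the packaging step itself is small; the overhead lives in everything around it. A minimal sketch of such an image (the script name task.sh is a placeholder assumption):

```dockerfile
# Minimal image wrapping a standalone shell script.
# "task.sh" is an assumed placeholder name.
FROM alpine:3.20
COPY task.sh /usr/local/bin/task.sh
RUN chmod +x /usr/local/bin/task.sh
ENTRYPOINT ["/usr/local/bin/task.sh"]
```

The Dockerfile is four lines; the registry, pipeline, and cluster config around it are where the mandated complexity actually accumulates.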

  • In my 20+ years in the industry, I've been at one company which really did Agile, and that was the one I started with.

    Everyone else communicates that they are doing Agile while being very far away from it ;)

    • if anyone knew what agile is, maybe more would have a chance of making it work (it won't). in my 30+ years the only "process" that worked and works is "hire the right people and get the F out of the way."

      1 reply →

  • It depends on your situation of course, but there are a lot of good reasons to package up that bash script and run it through the pipeline. If everyone does some backdoor deployment of their snowflake shell script that's not great. It doesn't matter if it's 20 lines or 2 lines.

  • There are many organizations which still ship software without Kubernetes. Perhaps even the vast majority.

    • Of course. For a long time I used to think I was working for one such organization. Until leadership decided "modernization" was a top priority for IT teams, as we were lagging far behind.

  • I don't think there are any other industry best practices you could have followed.

    That's basically why k8s is so compelling. Its tech is fine, but it's a social technology that is known and can be rallied behind, with consistent patterns that apply to anything you might dream of making "cloud native". What you did to get this script available for use will closely mirror how anyone else would get any piece of software available.

    Meanwhile, conventional sys-op stuff was cobbling together "right sized" solutions that work well for the company, maybe. These threads are overrun with "you might not need k8s" and "use the solution that fits your needs", but man, I pity the companies doing their own frontiering to explore their own bespoke "simple" paths.

    I do think you are on to something, though, about there not being good taste-making, and not always good oversight.

We have a hobby web based app that consists of multiple containers. It runs in docker compose. Serves 1000 users right now (runs 24/7). Single VM.

No Kubernetes whatsoever.

I agree with you.

  • Docker compose is brilliant while your stack remains on a single box, and will scale quite nicely this way for some time for most applications, with minimal maintenance overhead.

    My personal strategy has always been to start off in docker compose, and break out to a k8s configuration later if I have to scale beyond a single box.
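
    A minimal sketch of that starting point; all service names and images are illustrative assumptions, not from the thread:

    ```yaml
    # docker-compose.yml: one web process, a sidecar worker, and a
    # database on a single box. Names and images are placeholders.
    services:
      web:
        image: example/web:latest
        ports:
          - "80:3000"
        restart: unless-stopped
        depends_on: [db]
      worker:
        image: example/worker:latest
        restart: unless-stopped
        depends_on: [db]
      db:
        image: postgres:16
        restart: unless-stopped
        volumes:
          - dbdata:/var/lib/postgresql/data
    volumes:
      dbdata:
    ```

    `docker compose up -d` brings the whole stack up, and the same service boundaries later map naturally onto k8s Deployments if you outgrow the box.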

> it sounds like you don't have the scale for actually running Kubernetes.

You don't set up k8s because your current load can't be handled; you do it for future growth. Sometimes that growth doesn't pan out, and now you're left with complex infrastructure that is expensive to maintain without getting any of the benefit.

k8s is useful when you have services that must spin up and down together, and you want to swap out services and deploy all/some/one of them,

and then also package this so that you and other developers can get the infrastructure running locally or on other machines.

They use it to inflate their resumes for career progression rather than actually evaluating whether they need it in the first place.

This is why you get many folks over-thinking the solution and picking the most hyped technologies and using them to solve the wrong problems without thinking about what they are selling.

You don't need K8s + AWS EC2 + S3 just to host a web app. That tells me they like lighting money on fire, bankrupting the company, and moving on to the next one.

  • Often the alternatives presented to me in discussions as cheaper are actually burning money.

    But given how I always see "you don't need k8s because you're not going to scale that fast", I feel like even professional k8s operators have missed its fundamental design goal :/ (maximizing utilization of finite compute)

Even if using just one VM, I'll probably slap k3s on it and manage my application using manifests. It's just so much easier than dealing with puppet or chef or vanilla cloud-init. Docker compose works too, but at that point it's just easier to stick with k3s; then I can have nice things like background jobs, a straightforward path to HA, access to an ecosystem of existing software, and a nicer CLI.
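
As a hedged sketch of what "manage my application using manifests" can look like on a single k3s node; the names, image, and schedule here are assumptions for illustration:

```yaml
# app.yaml: a long-running service plus a background job, both managed
# declaratively by k3s on one VM. All names/images are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: ghcr.io/example/app:latest
          ports:
            - containerPort: 8080
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-job
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: job
              image: ghcr.io/example/app:latest
              args: ["run-nightly-task"]
```

k3s bundles kubectl, so `kubectl apply -f app.yaml` is the whole deploy; the CronJob is the "background jobs" part that compose has no first-class answer for.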

  • That's what I don't get when people bring up this idea that k8s is complicated.

    All of those other tools are complicated and fragile too.

    • I think the things that trip people up are:

      1. People expect k8s to be an opinionated platform and it's very happy to let you make a mess

      2. People think k8s is supposed to be a cross platform portability layer and ... it maybe can be if you're very careful, but it's mostly not that

      3. People compare k8s/cloud/etc to some monolithic application with admin permissions to everything and they compare that to the "difficulty" of dealing with RBAC/IAM/networking/secrets management

      4. People don't realize how much more complicated vanilla Linux tooling is, and how much accidental complexity is involved

yeah, it's like wanting to drive to the mall in the Space Shuttle and then complaining that it's too complicated

The problem with Kubernetes is that it doesn't scale down to small deployments very well, but it sure as shit doesn't scale up to large ones either. Large shared multi-tenant clusters have massive problems even when running parts of the same application with the same incentives; it falls apart completely when the tenants are diverse.

Nomad has neither of these problems.

I have no doubt that there are legit use cases for something like k8s at Google or other multi-billion-dollar companies.

But if its use were confined to that use case, pretty much nobody would be using it (except as a customer of the organization's infra), and barely anyone would be talking about it (like how there isn't much talk about Borg).

The reason k8s is a thing in the first place is that it's being used by way too many people for their own good. (Most people who have worked in startups have met too many architecture astronauts in their lives.)

If I had to bet, I'd wager that 99% of k8s users are in the “spin up a few containers to run your web app” category (for the simple reason that for every billion-dollar tech business using it for legit reasons, there are many thousands of early startups who do not).

  • The legit use case for companies like Google/Amazon etc is only to sell it to customers. None of these companies use K8s internally for real critical workloads.