Comment by embedding-shape

21 hours ago

For about 15 years I tried to keep a homelab, but I always got lost in the complexity after a year or so. About 3 years ago I gave NixOS a try for managing everything instead, which (perhaps counter-intuitively) suddenly made everything easier: now I can come back after months and still understand where everything is and how it works just from reading.

Setting up Forgejo + runners declaratively is probably ~100 lines in total, and it doesn't matter if I forget how it works; I just have to spend five minutes reading to catch up when I come back in 6 months to change or fix something.
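For reference, a minimal sketch of what those ~100 lines boil down to. Option names follow recent nixpkgs (`services.forgejo` and `services.gitea-actions-runner`, which Forgejo runners use); the domain, port, label, and token path are placeholders:

```nix
# forgejo.nix -- Forgejo plus one Actions runner, declaratively (a sketch).
{ config, ... }:
{
  services.forgejo = {
    enable = true;
    settings.server = {
      DOMAIN = "git.example.local";   # placeholder hostname
      HTTP_PORT = 3000;
    };
  };

  services.gitea-actions-runner.instances.default = {
    enable = true;
    name = "local-runner";
    url = "http://git.example.local:3000";
    # Registration token from the Forgejo admin UI, kept out of the Nix store.
    tokenFile = "/var/lib/secrets/forgejo-runner-token";
    labels = [ "nixos:host" ];        # placeholder label
  };
}
```

Import this from the machine's `configuration.nix` and rebuild; everything else (users, working directories, systemd units) is generated by the modules.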

I think the trick to avoid getting tired of it is trying to just make it as simple as humanly possible. The less stuff you have, the easier it gets, at least that's intuitive :)

Just to echo what others are saying: NixOS and Proxmox are the answer.

I run both right now, but I am in the process of just running NixOS on everything.

NixOS really is that good, particularly for homelabs. The module system, and the ability to share modules across machines, is a real superpower: you end up with a base config that all machines extend. The same idea applies to users and groups.
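A sketch of that base-config pattern (file names and contents are illustrative):

```nix
# base.nix -- shared module that every machine imports (illustrative).
{ pkgs, ... }:
{
  # Shared admin user and group membership for all hosts.
  users.users.admin = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
  };

  # Baseline tooling and services common to the fleet.
  environment.systemPackages = with pkgs; [ git vim htop ];
  services.openssh.enable = true;
}
```

Each host's `configuration.nix` then just does `imports = [ ./base.nix ];` and adds its own host name and services on top.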

One of the other big benefits, particularly for homelabs, is that your config is effectively self-documenting. Every quirk you discover is persisted in a source-controlled file. Upgrades are self-documenting too: upstream module maintainers are pretty good about guiding you towards the new way of doing things via option and module deprecation.

  • I mean this in a good way, but I'm slightly chuckling to myself that it reads like people are just discovering IaC...on HN. That's all Nix configs are, at the end of the day.

    No matter the tool, manage your environment in code and your life becomes much easier. People get addicted to ClickOps for the initial hit, and then end up in a packed closet with a one-way ticket to Narnia.

    This happens in large environments too, so not at all just a home lab thing.

    • I and many other NixOS users know what IaC is :)

      A NixOS config is a bit different because it’s lower level and is configuring the OS through a first-party interface. It is more like extending the distro itself as opposed to configuring an existing distro after the fact.

      The other big difference is that it is purely declarative, vs. a simulation of a declarative config à la Ansible and other tools. Again, that's because the distro is config-aware at all levels, starting from early boot.

      The last difference is atomicity. You can (in theory) rely on an all-or-nothing config switch, as well as the ability to roll back at any time (even at boot).

      On top of all this are the niceties enabled by Nix and nixpkgs: shared binary caches, running a config in a VM, baking a live ISO or cloud VM image from a config (Packer style), the NixOS test framework, etc.
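      As an example of the ISO case: baking a live image out of a config you already maintain is a few lines. The installer module path is real in nixpkgs; `./configuration.nix` is a placeholder for your own machine config:

      ```nix
      # live-iso.nix -- build with:
      #   nix-build '<nixpkgs/nixos>' -A config.system.build.isoImage -I nixos-config=./live-iso.nix
      { modulesPath, ... }:
      {
        imports = [
          (modulesPath + "/installer/cd-dvd/installation-cd-minimal.nix")
          ./configuration.nix  # placeholder: the config you already maintain
        ];
      }
      ```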

Unless you actually need the hardware (local LLM host, massive data-transformation jobs), it is also easy to fall into the many-machines trap. A single old laptop, N97, OptiPlex, etc. sitting in a corner is actually a huge amount of computing power that will rival most cloud offerings. A single machine can do so much.

  • Yeah, true. I have an old Asus X550L from 2014, a budget home laptop, with the battery removed, running as my server. I do some dev on it with VSCode remoting in and Claude Code, and run Jellyfin, Audiobookshelf, Teamspeak, IRC and TS bots, nginx, Syncthing, and some static websites.

    I'm still usually under 10% CPU usage and at 25% RAM usage unless I'm streaming and transcoding with Jellyfin.

    It's been fun and super useful. Almost any old laptop from the past 15 years could handle several home-computing needs with little difficulty.

Yup this is what I've got up and running recently and it's been awesome.

My setup is roughly the following.

- Dell OptiPlex mini running Proxmox for compute; Unraid NAS for storage.

- Debian VM on the Proxmox machine running Forgejo and Komodo for container management.

- Monorepo in Forgejo for the homelab infrastructure. This lets me give Claude access to just the monorepo on my local machine to help me build stuff out, without needing to give it direct access to any of my actual servers.

- Claude helps me build out the deployment pipeline for VMs/containers in Forgejo Actions, which looks like:

  - Forgejo runner creates NixOS builds => Deploy VMs via Proxmox API => Deploy containers via Komodo API

- I've got separate VMs for

  - gateway for reverse-proxy & authentication

  - monitoring with prometheus/loki/grafana stack

  - general use applications
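A gateway VM like that is only a few lines of Nix. A sketch of the reverse-proxy part (host names, IPs, and ports here are placeholders, not the actual setup):

```nix
# gateway.nix -- reverse proxy in front of internal services (sketch).
{
  services.nginx = {
    enable = true;
    # Proxy one internal service; repeat per app behind the gateway.
    virtualHosts."jellyfin.home.example".locations."/" = {
      proxyPass = "http://10.0.0.21:8096";  # placeholder backend VM
      proxyWebsockets = true;
    };
  };
}
```

The monitoring VM is the same idea with `services.prometheus`, `services.loki`, and `services.grafana` enabled instead.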

Since storage is external with NFS shares, I can tear down and rebuild the VMs whenever I need to redeploy something.

All of my docker compose files and nix configs live in the monorepo on Forgejo, so I can use Renovate to keep everything up to date.

Plan files, kanban board, and general documentation live adjacent to Nix and Docker configs in the monorepo, so Claude has all the context it needs to get things done.

I did this because I got tired of using Docker templates on Unraid. They were a great way to get started, but it's hard to pin container versions and still keep them up-to-date (Unraid relies heavily on the `latest` tag). I'm moving stuff over to this setup bit by bit and I've been really enjoying it so far.
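Pinning is straightforward once containers are declared in Nix, and a version bump becomes a one-line diff that Renovate can open a PR for. A sketch using the `virtualisation.oci-containers` module (the image tag, ports, and paths are examples):

```nix
# Declarative container with a pinned tag instead of `latest`.
{
  virtualisation.oci-containers = {
    backend = "docker";
    containers.audiobookshelf = {
      image = "ghcr.io/advplyr/audiobookshelf:2.12.2"; # example pinned tag
      ports = [ "13378:80" ];
      volumes = [ "/srv/audiobookshelf:/config" ];     # example host path
    };
  };
}
```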

Thanks. Yeah, I've probably been overcomplicating it before. I was running Kubernetes on Talos, thinking that at least it would be familiar. Power tools like that for running simple workloads on a single node are inviting headaches.

Yeah this is the way.

The problem is that people never stop tinkering and keep trying to make their homelab better, faster, etc. But its purpose is not to be a system that you keep fine-tuning (unless that's what you're actually doing it for); its purpose is to serve your needs as a homelab.

The best homelabs are boring in terms of tech stacks, imo. The unfortunate paradox is that once you do get into homelabs, it's hard to get out of the mentality of constantly trying out new stuff.

Maybe my needs are simpler, but I just made do with systemd services and apt (Debian). I've also set up Incus for the occasional software testing and playing around. After using OpenBSD as a daily driver, I'm more keen on creating a native package for the OS/distro than wrangling docker-compose files.

  • Yea, it's always weird to see people say "I'm simplifying my life and reducing my cloud dependencies by running a homelab and self-hosting!" and then list the dozens of alphabet-soup services they're running on it and now depending on. "Oh I run 20 VMs and containers and Docker orchestration and Nextcloud and Syncthing and Jellyfin and Plex and Forgejo and Komodo and Home Assistant and Immich and Trilium and Audiobookshelf and another Nextcloud and This Stack and That Pipeline" and oh my god, haven't you really just made your computing even worse?

    My "homelab" is basically Linux + NFS, with standard development tools.

    • Depends on your requirements. I'm jealous you can get away with something so simple; I can't. I also have a poor memory, so having everything described in code has been most helpful: if I SSH into a server after months of not touching it, I barely remember what's on it anymore.

      I think the most important thing for me is that I choose when I have time to upgrade; it's no longer forced upon me. That's why I prefer to depend on myself rather than on third-party services for things that are essential. So many times I've had to put other (more important) things on hold because some service somewhere decided to change something, and to get stuff working again I'd need to migrate. I just got so tired of not being in control of that schedule.